Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules executes on one of the processors of the system and communicates with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design, and is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area either remains at the experimental level or does not consider precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
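To make the completion-time model concrete, here is a minimal sketch (illustrative names and data, not the paper's notation) that computes the completion time of a given allocation over a module precedence DAG, charging communication cost only when dependent modules sit on different processors:

```python
# Completion time of one allocation over a module precedence DAG.
# exec_time[m][p]: run time of module m on processor p;
# comm[(p, q)]: per-dependency communication cost between processors.
def completion_time(alloc, exec_time, comm, preds):
    finish = {}
    def ft(m):
        if m not in finish:
            start = max((ft(d) + comm[(alloc[d], alloc[m])] for d in preds[m]),
                        default=0.0)
            finish[m] = start + exec_time[m][alloc[m]]
        return finish[m]
    return max(ft(m) for m in preds)

preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
exec_time = {"a": {"p1": 2, "p2": 3}, "b": {"p1": 4, "p2": 2},
             "c": {"p1": 3, "p2": 3}, "d": {"p1": 1, "p2": 2}}
comm = {(p, q): 0 if p == q else 1 for p in ("p1", "p2") for q in ("p1", "p2")}
print(completion_time({"a": "p1", "b": "p1", "c": "p2", "d": "p1"},
                      exec_time, comm, preds))  # 8
```

Minimizing this quantity over `alloc` is the allocation problem such models formalize.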
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, C.C.; Youngblood, J.N.; Saha, A.
1987-12-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
NASA Astrophysics Data System (ADS)
Hunter, Geoffrey
2004-01-01
A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution; i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc.). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences, and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commencement, then it is essential that it be allocated dynamically as the first action of its execution. This dynamically allocated/deallocated storage of each subprogram's intermediate values conforms with the stack discipline; i.e. last allocated = first to be deallocated, an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc.) was computable at execution time. A more general requirement of a Turing machine process is code generation at run-time; this mandates access to the source language processor (compiler/interpreter) during execution of the process. This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating-system boot-time) memory allocation, and were thus limited to executing FA processes.
The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial-and-error executions, modifying the program's compile-time constants (array dimensions) to iterate towards the values required at run-time by the data being processed. This era of trial-and-error computing still persists; it pervades the culture of current (2003) computing practice.
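A short sketch (illustrative, not from the paper) shows the data-invariant style the abstract advocates: the program discovers its storage needs from the data itself at run time, so its text never changes with the input.

```python
# TM-style, data-invariant processing: storage grows with the data itself.
def running_means(stream):
    values = []                      # size unknown until run time
    for line in stream:
        values.append(float(line))   # dynamic allocation as data arrives
        yield sum(values) / len(values)

# An FA-style program would instead fix storage in advance, e.g.
# values = [0.0] * 1000, and need editing whenever the data outgrew it.
import io
print(list(running_means(io.StringIO("1\n2\n6\n"))))  # [1.0, 1.5, 3.0]
```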
Probabilistic resource allocation system with self-adaptive capability
NASA Technical Reports Server (NTRS)
Yufik, Yan M. (Inventor)
1996-01-01
A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and directed links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Reliability values are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves over time due to modification of network parameters and partitioning criteria based on the performance feedback.
Probabilistic resource allocation system with self-adaptive capability
NASA Technical Reports Server (NTRS)
Yufik, Yan M. (Inventor)
1998-01-01
A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and weighted links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Weights are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves over time due to modification of network parameters and partitioning criteria based on the performance feedback.
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; Xia, J.; Huang, Q.; Yu, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity, and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust), and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is a key factor that may impact the feasibility of parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically: 1) To obtain optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks; however, its efficiency decreases significantly as the numbers of subdomains and computing nodes increase. 2) To compensate for this performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of seeking optimal solutions, this method obtains relatively good feasible solutions within acceptable time; however, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
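A minimal sketch of the K-Means-style allocation idea described above (illustrative names, assuming subdomains are represented by their centroid coordinates; the papers' K&K method additionally refines the partition with Kernighan-Lin swaps):

```python
# Cluster subdomain centroids into one group per computing node, so
# geographically adjacent subdomains (which exchange halo data) tend to
# land on the same node. Illustrative only.
import numpy as np

def kmeans_allocate(centroids, n_nodes, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = centroids[rng.choice(len(centroids), n_nodes, replace=False)]
    for _ in range(iters):
        # assign each subdomain to its nearest node "center"
        labels = np.argmin(((centroids[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned subdomains
        for k in range(n_nodes):
            if (labels == k).any():
                centers[k] = centroids[labels == k].mean(axis=0)
    return labels  # labels[i] = node hosting subdomain i

# 8x8 grid of subdomains split across 4 nodes
grid = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
print(kmeans_allocate(grid, 4))
```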
2013-12-01
…authors present a Computing on Dissemination with predictable contacts (pCoD) algorithm, since it is impossible to reserve task execution time in advance… [Flattened acronym-glossary residue omitted: Computing While Charging; DAG, Directed Acyclic Graph; TTL, Time-to-live; pCoD, predictable contacts; CoD, Computing on Dissemination; upCoD, unpredictable contacts.]
ERIC Educational Resources Information Center
Possen, Uri M.; And Others
As an introduction, this paper presents a statement of the objectives of the university computing center (UCC) from the viewpoint of the university, the government, the typical user, and the UCC itself. The operating and financial structure of a UCC are described. Three main types of budgeting schemes are discussed: time allocation, pseudo-dollar,…
Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng
2018-04-13
Aiming to minimize the damage caused by river chemical spills, efficient emergency material allocation is critical for quick-response emergency rescue decision-making. In this study, an emergency material allocation framework based on time-varying supply-demand constraints is developed to allocate emergency material, minimize the emergency response time, and satisfy the dynamic emergency material requirements in post-accident phases of river chemical spills. The theoretically critical emergency response time is first obtained for the emergency material allocation system to select a series of appropriate emergency material warehouses as potential supportive centers. Then, an enumeration method is applied to identify the practically critical emergency response time and the optimum emergency material allocation and replenishment scheme. Finally, the developed framework is applied to a computational experiment based on the South-to-North Water Transfer Project in China. The results illustrate that the proposed methodology is a simple and flexible tool for appropriately allocating emergency material to satisfy time-dynamic demands during emergency decision-making. Therefore, decision-makers can identify an appropriate emergency material allocation scheme that balances time-effectiveness and cost-effectiveness objectives under different emergency pollution conditions.
A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.
Wang, Lujia; Liu, Ming; Meng, Max Q-H
2017-02-01
Cloud computing enables users to share computing resources on demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks, since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobot applications with cooperative control adopt a decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely the link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed mechanism is fast, robust, accurate, and scalable, and it reduces both global communication and unnecessary repeated computation. It is designed for firm real-time resource retrieval in physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
NASA Astrophysics Data System (ADS)
Kim, Gi Young
The problem we investigate deals with an Image Intelligence (IMINT) sensor allocation schedule for High Altitude Long Endurance UAVs in a dynamic, Anti-Access Area Denial (A2AD) environment. The objective is to maximize the Situational Awareness (SA) of decision makers. The value of SA can be improved in two different ways. First, if a sensor allocated to an Area of Interest (AOI) detects target activity, the SA value increases. Second, the SA value increases if an AOI is monitored for a certain period of time, regardless of target detections. These values are functions of the sensor allocation time, sensor type, and mode. Relatively few studies in the archival literature have been devoted to an analytic, detailed explanation of the target detection process and of AOI monitoring value dynamics. These two values are the fundamental criteria used to choose the most judicious sensor allocation schedule. This research presents mathematical expressions for target detection processes and shows the monitoring value dynamics. Furthermore, the dynamics of target detection result from combined processes between belligerent behavior (target activity) and friendly behavior (sensor allocation). We investigate these combined processes and derive mathematical expressions for simplified cases. These closed-form mathematical models can be used as Measures of Effectiveness (MOEs), i.e., target activity detection, to evaluate sensor allocation schedules. We also verify these models with discrete event simulations, which can also be used to describe more complex systems. We introduce several methodologies to achieve a judicious sensor allocation schedule focusing on the AOI monitoring value. The first methodology is a discrete time integer programming model, which provides an optimal solution but is impractical for real world scenarios due to its computation time. Thus, it is necessary to trade off solution quality against computation time. The Myopic Greedy Procedure (MGP) is a heuristic which chooses the largest immediate unit-time return at each decision epoch. This reduces computation time significantly, but the quality of the solution may be only 95% of optimal (for small-sized problems). Another alternative is a multi-start random constructive Hybrid Myopic Greedy Procedure (H-MGP), which incorporates stochastic variation in choosing an action at each stage and repeats a predetermined number of times (roughly 99.3% of optimal with 1000 repetitions). Finally, the One Stage Look Ahead (OSLA) procedure considers all the 'top choices' at each stage for a temporary time horizon and chooses the best action (roughly 98.8% of optimal with no repetition). Using the OSLA procedure, we can obtain improved solutions within a reasonable computation time. Other important issues discussed in this research are methodologies for the development of input parameters for real world applications.
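A minimal sketch of the MGP idea (names and the toy value function are assumptions, not the dissertation's model): at each decision epoch the sensor goes to the AOI with the largest immediate unit-time return.

```python
# Myopic greedy allocation: pick the AOI with the best immediate return.
def myopic_greedy(aois, horizon, unit_gain):
    """unit_gain(aoi, t): immediate SA value of covering `aoi` at epoch t."""
    schedule, total = [], 0.0
    for t in range(horizon):
        best = max(aois, key=lambda a: unit_gain(a, t))
        schedule.append(best)
        total += unit_gain(best, t)
    return schedule, total

# Toy value function; a real one would model detection probabilities and
# monitoring-value dynamics as described in the abstract.
sched, value = myopic_greedy(["A", "B"], 3,
                             lambda a, t: {"A": 2.0, "B": 1.0}[a] / (t + 1))
print(sched, value)  # ['A', 'A', 'A'] 3.666...
```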
Data distribution method of workflow in the cloud environment
NASA Astrophysics Data System (ADS)
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing for workflow applications provides the required high-efficiency computation and large storage capacity, but it also brings challenges to the protection of trade secrets and other privacy data. Because protecting privacy data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage, to improve the existing data allocation strategy. The security of the public-cloud computation depends on the private cloud: the static allocation method in the initial stage partitions only the non-confidential data, improving on the original allocation, while in the operational phase the continuously generated data are used to dynamically adjust the data distribution scheme. The experimental results show that the improved method is effective in reducing the data transmission time.
Huang, Jie; Zeng, Xiaoping; Jian, Xin; Tan, Xiaoheng; Zhang, Qi
2017-01-01
The spectrum allocation for cognitive radio sensor networks (CRSNs) has received considerable research attention under the assumption that the spectrum environment is static. However, in practice, the spectrum environment varies over time due to primary user/secondary user (PU/SU) activity and mobility, resulting in time-varied spectrum resources. This paper studies resource allocation for chunk-based multi-carrier CRSNs with time-varied spectrum resources. We present a novel opportunistic capacity model through a continuous time semi-Markov chain (CTSMC) to describe the time-varied spectrum resources of chunks and, based on this, propose a joint power and chunk allocation model that considers the opportunistically available capacity of chunks. To reduce the computational complexity, we split this model into two sub-problems and solve them via the Lagrangian dual method. Simulation results illustrate that the proposed opportunistic capacity-based resource allocation algorithm achieves better performance than traditional algorithms when the spectrum environment is time-varied. PMID:28106803
Real time target allocation in cooperative unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kudleppanavar, Ganesh
The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial, and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and XBee wireless modules shows its capability to execute efficiently on hardware with minimal computational complexity.
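A minimal sketch of the nearest-neighbor assignment step (illustrative names; the energy check is a stand-in for the thesis' stored-energy model):

```python
# Assign a new target to the closest UAV that still has enough energy.
import math

def assign_target(target, uavs):
    """uavs: dict id -> {"pos": (x, y), "energy": float}; returns UAV id."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    feasible = [(dist(u["pos"], target), uid)
                for uid, u in uavs.items()
                if u["energy"] >= dist(u["pos"], target)]
    return min(feasible)[1] if feasible else None

uavs = {"u1": {"pos": (0.0, 0.0), "energy": 10.0},
        "u2": {"pos": (5.0, 5.0), "energy": 1.0}}
print(assign_target((4.0, 4.0), uavs))  # 'u1': u2 is nearer but lacks energy
```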
NASA Technical Reports Server (NTRS)
Bradley, D. B.; Irwin, J. D.
1974-01-01
A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input workload of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying precedence execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.
System Resource Allocations | High-Performance Computing | NREL
To use NREL's high-performance computing (HPC) resources, you will need an allocation of: compute hours on NREL HPC systems, including Peregrine and Eagle; and storage space (in terabytes) on Peregrine, Eagle, and Gyrfalcon. Allocations are principally done in response to an annual call for allocations.
Fog computing job scheduling optimization based on bees swarm
NASA Astrophysics Data System (ADS)
Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid
2018-04-01
Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and genetic algorithms in terms of CPU execution time and allocated memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amjad Majid; Albert, Don; Andersson, Par
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Task allocation in a distributed computing system
NASA Technical Reports Server (NTRS)
Seward, Walter D.
1987-01-01
A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and the resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For general group setup times, a heuristic algorithm and a branch-and-bound algorithm are proposed. Computational experiments show that the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.
Computer-Assisted Instruction: One Aid for Teachers of Reading.
ERIC Educational Resources Information Center
Rauch, Margaret; Samojeden, Elizabeth
Computer assisted instruction (CAI), an instructional system with direct interaction between the student and the computer, can be a valuable aid for presenting new concepts, for reinforcing of selective skills, and for individualizing instruction. The advantages CAI provides include self-paced learning, more efficient allocation of classroom time,…
Computer software tool REALM for sustainable water allocation and management.
Perera, B J C; James, B; Kularathna, M D U
2005-12-01
REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modeling tool, which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modelling of environmental flows, and assessing security of supply issues.
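To make the mass-balance-plus-capacity idea concrete, here is a toy sketch of one simulation time step as a linear program (illustrative only; REALM's actual network solver and operating-rule system are far richer). It assumes scipy is available; all numbers are made up.

```python
# Toy single-time-step allocation: two reservoirs supply one demand node
# through capacity-limited carriers; minimize unmet demand. Not REALM's
# algorithm, just the same style of network LP.
from scipy.optimize import linprog

demand = 80.0
storage = [50.0, 60.0]     # water available at each reservoir
capacity = [40.0, 70.0]    # carrier capacity from each reservoir

# Variables: x0, x1 = releases; s = shortfall. Minimize the shortfall.
c = [0.0, 0.0, 1.0]
# Mass balance at the demand node: x0 + x1 + s = demand
A_eq, b_eq = [[1.0, 1.0, 1.0]], [demand]
bounds = [(0, min(storage[0], capacity[0])),
          (0, min(storage[1], capacity[1])),
          (0, None)]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # one optimal release pattern, e.g. [40., 40., 0.]
```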
Analog Processor To Solve Optimization Problems
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.
1993-01-01
Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.
1990-01-01
the six fields will have two million cell locations. The table below shows the total allocation of 392 chips across fields and banks. To allow for… future growth, we allocate 16 wires for addressing both the rows and columns. [Flattened table residue omitted: per-field chip counts in 4-Mbit locations, bytes, bits, and chips.] …sources apt to appear in most problems. If material parameters change during a run, then time must be allocated to read these constants into their…
Investigation of Optimal Control Allocation for Gust Load Alleviation in Flight Control
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Bodson, Marc
2012-01-01
Advances in sensors and avionics computation power suggest real-time structural load measurements could be used in flight control systems for improved safety and performance. A conventional transport flight control system determines the moments necessary to meet the pilot's command, while rejecting disturbances and maintaining stability of the aircraft. Control allocation is the problem of converting these desired moments into control effector commands. In this paper, a framework is proposed to incorporate real-time structural load feedback and structural load constraints in the control allocator. Constrained optimal control allocation can be used to achieve desired moments without exceeding specified limits on monitored load points. Minimization of structural loads by the control allocator is used to alleviate gust loads. The framework to incorporate structural loads in the flight control system and an optimal control allocation algorithm will be described and then demonstrated on a nonlinear simulation of a generic transport aircraft with flight dynamics and static structural loads.
Cellular trade-offs and optimal resource allocation during cyanobacterial diurnal growth
Knoop, Henning; Bockmayr, Alexander; Steuer, Ralf
2017-01-01
Cyanobacteria are an integral part of Earth’s biogeochemical cycles and a promising resource for the synthesis of renewable bioproducts from atmospheric CO2. Growth and metabolism of cyanobacteria are inherently tied to the diurnal rhythm of light availability. As yet, however, insight into the stoichiometric and energetic constraints of cyanobacterial diurnal growth is limited. Here, we develop a computational framework to investigate the optimal allocation of cellular resources during diurnal phototrophic growth using a genome-scale metabolic reconstruction of the cyanobacterium Synechococcus elongatus PCC 7942. We formulate phototrophic growth as an autocatalytic process and solve the resulting time-dependent resource allocation problem using constraint-based analysis. Based on a narrow and well-defined set of parameters, our approach results in an ab initio prediction of growth properties over a full diurnal cycle. The computational model allows us to study the optimality of metabolite partitioning during diurnal growth. The cyclic pattern of glycogen accumulation, an emergent property of the model, has timing characteristics that are in qualitative agreement with experimental findings. The approach presented here provides insight into the time-dependent resource allocation problem of phototrophic diurnal growth and may serve as a general framework to assess the optimality of metabolic strategies that evolved in phototrophic organisms under diurnal conditions. PMID:28720699
Allocation Usage Tracking and Management | High-Performance Computing | NREL
On NREL's high-performance computing (HPC) systems, learn how to track and manage your allocations. The alloc_tracker script (/usr/local/bin/alloc_tracker) may be used to see what allocations you have access to, how much of each allocation has been used, how much remains, and how many node hours will be forfeited at the…
Range wise busy checking 2-way imbalanced algorithm for cloudlet allocation in cloud environment
NASA Astrophysics Data System (ADS)
Alanzy, Mohammed; Latip, Rohaya; Muhammed, Abdullah
2018-05-01
Cloud computing is considered a new business paradigm and has been a popular platform over the last few years. Many organizations, agencies, and departments run time-critical tasks that need to be accomplished as soon as possible, and they encounter IT issues due to the massive rise of data, applications, and solution scopes. Currently, the main issue for the cloud is how to make the cloud computing environment more capable, which requires a competent cloudlet allocation strategy; thus, a huge number of studies have been conducted on this matter, seeking to assign cloudlets to VMs or resources by a variety of strategies. In this paper we propose a range-wise busy-checking 2-way imbalanced algorithm for cloudlet allocation in cloud computing. Compared to other methods, it decreases the completion time of task execution, which is fundamental to enhancing system performance measures such as the makespan. The algorithm was simulated using CloudSim, giving higher-speed VMs more opportunity to accommodate more cloudlets in their local queues without considering the threshold balance condition. The simulation results show that the average makespan is lower compared to the previous cloudlet allocation strategy.
Stine-Morrow, Elizabeth A. L.; Noh, Soo Rim; Shake, Matthew C.
2009-01-01
This research examined age differences in the accommodation of reading strategies as a consequence of explicit instruction in conceptual integration. In Experiment 1, young, middle-aged, and older adults read sentences for delayed recall using a moving window method. Readers in an experimental group received instruction in making conceptual links during reading while readers in a control group were simply encouraged to allocate effort. Regression analysis to decompose word-by-word reading times in each condition isolated the time allocated to conceptual processing at the point in the text at which new concepts were introduced, as well as at clause and sentence boundaries. While younger adults responded to instructions by differentially allocating effort to sentence wrap-up, older adults allocated effort to intrasentence wrap-up and to new concepts as they were introduced, suggesting that older readers optimized their allocation of effort to linguistic computations for textbase construction within their processing capacity. Experiment 2 verified that conceptual integration training improved immediate recall among older readers as a consequence of engendering allocation to conceptual processing. PMID:19941199
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
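A minimal simulated-annealing sketch of this kind of element-to-processor mapping (illustrative; the NASA algorithm's cost model for hypercube communication is more detailed):

```python
# Anneal an assignment of finite elements to processors: cost = load
# imbalance + communication across shared element edges. Illustrative only.
import math, random

def anneal(n_elems, n_procs, edges, steps=20000, t0=2.0, seed=1):
    rnd = random.Random(seed)
    assign = [rnd.randrange(n_procs) for _ in range(n_elems)]

    def cost(a):
        loads = [0] * n_procs
        for p in a:
            loads[p] += 1
        comm = sum(a[i] != a[j] for i, j in edges)   # cut edges need messages
        return (max(loads) - min(loads)) + comm

    c = cost(assign)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9           # linear cooling schedule
        i, p = rnd.randrange(n_elems), rnd.randrange(n_procs)
        old = assign[i]
        assign[i] = p
        c_new = cost(assign)
        if c_new <= c or rnd.random() < math.exp((c - c_new) / t):
            c = c_new                                 # accept the move
        else:
            assign[i] = old                           # reject the move
    return assign, c

# A chain of 8 elements onto 2 processors: the best cut is a single edge.
edges = [(i, i + 1) for i in range(7)]
print(anneal(8, 2, edges))
```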
A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv
In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task that has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
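A hedged sketch of the loop the abstract describes (an interpretation, since the abstract does not define the completion-time matrix): pick the unallocated task whose completion-time list has the largest average, and give it to the machine that finishes it earliest.

```python
# Sort-Mid-style allocation sketch. etc[t][m] = estimated time of task t on
# machine m (assumed given). Illustrative reading of the abstract's steps.
def sort_mid(etc):
    n_machines = len(etc[0])
    ready = [0.0] * n_machines                 # machine availability times
    unassigned, mapping = set(range(len(etc))), {}
    while unassigned:
        # task with the largest average completion time over all machines
        t = max(unassigned, key=lambda i: sum(etc[i]) / n_machines)
        # allocated to the machine completing it earliest
        m = min(range(n_machines), key=lambda j: ready[j] + etc[t][j])
        mapping[t], ready[m] = m, ready[m] + etc[t][m]
        unassigned.remove(t)
    return mapping, max(ready)                 # allocation and makespan

print(sort_mid([[4, 6], [2, 8], [5, 3]]))      # e.g. ({0: 0, 1: 0, 2: 1}, 6.0)
```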
Sort-Mid tasks scheduling algorithm in grid computing
Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.
2014-01-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task that has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan. PMID:26644937
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
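A toy version of this kind of allocation problem can be written as a bounded least-squares program (a sketch under assumed numbers, not the NASA program itself): find actuator commands u within limits that best achieve the commanded moments m = Bu while lightly penalizing effort.

```python
# Toy control allocation: achieve desired moments with redundant actuators,
# minimizing ||B u - m||^2 + eps ||u||^2 subject to actuator limits.
import numpy as np
from scipy.optimize import lsq_linear

B = np.array([[1.0, 0.5, -0.5],      # effectiveness of 3 actuators
              [0.0, 1.0,  1.0]])     # on 2 moment axes
m = np.array([0.8, 0.4])             # commanded moments
eps = 1e-3                           # small effort penalty (regularization)

# Stack the effort penalty under the moment equations.
A = np.vstack([B, np.sqrt(eps) * np.eye(3)])
b = np.concatenate([m, np.zeros(3)])
res = lsq_linear(A, b, bounds=(-0.5, 0.5))  # actuator deflection limits
print(res.x)                          # allocated commands within limits
```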
Arranging computer architectures to create higher-performance controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1988-01-01
Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.
Machine learning based Intelligent cognitive network using fog computing
NASA Astrophysics Data System (ADS)
Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik
2017-05-01
In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze the time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. Also, the computing nodes send a periodic signal summary, which is much smaller than the original signal, to the cloud so that the overall system's spectrum resource allocation strategies are dynamically updated. Applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, this further strengthens system security by reducing the communication burden of the communications network.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task have drawn interest, and a detailed analysis report has been made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
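The abstract does not give TPR's exact form; the sketch below shows one plausible reading (an assumption, not the paper's method): fit separate linear models to the early and late phases of a task's progress-versus-time samples and extrapolate the late-phase fit to completion.

```python
# Hedged two-phase regression sketch: fit progress = a + b*t separately on
# the first and second halves of the samples; predict the time at which the
# late-phase line reaches progress 1.0 (task completion). Assumes each
# phase has at least two samples. Illustrative only.
import numpy as np

def tpr_finish_time(t, progress, split=0.5):
    t, progress = np.asarray(t, float), np.asarray(progress, float)
    k = max(2, int(len(t) * split))
    b1, a1 = np.polyfit(t[:k], progress[:k], 1)   # early phase (kept to
    b2, a2 = np.polyfit(t[k:], progress[k:], 1)   # expose the phase change)
    return (1.0 - a2) / b2                        # when late phase hits 100%

t = [0, 10, 20, 30, 40, 50]
p = [0.00, 0.05, 0.10, 0.30, 0.50, 0.70]          # task sped up mid-run
print(round(tpr_finish_time(t, p), 1))            # 65.0
```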
NASA Technical Reports Server (NTRS)
Chu, Y.-Y.; Rouse, W. B.
1979-01-01
As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
Identifying Memory Allocation Patterns in HEP Software
NASA Astrophysics Data System (ADS)
Kama, S.; Rauschmayr, N.
2017-10-01
HEP applications perform an excessive number of allocations/deallocations within short time intervals, which results in memory churn, poor locality, and performance degradation. These issues have been known for a decade, but due to the complexity of software frameworks and the billions of allocations in a single job, until recently no efficient mechanism has been available to correlate these issues with source code lines. However, with the advent of the Big Data era, many tools and platforms are now available to do large-scale memory profiling. This paper presents a prototype program developed to track and identify every single (de-)allocation. The CERN IT Hadoop cluster is used to compute key memory metrics, like locality, variation, lifetime, and density of allocations. The prototype further provides a web-based visualization back-end that allows the user to explore the results generated on the Hadoop cluster. Plotting these metrics for every single allocation over time gives new insight into an application's memory handling. For instance, it shows which algorithms cause which kinds of memory allocation patterns, which function flows cause how many short-lived objects, what the most commonly allocated sizes are, etc. The paper gives an insight into the prototype and shows profiling examples for the LHC reconstruction, digitization, and simulation jobs.
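As a small illustration of the kind of per-line attribution involved (not the paper's CERN/Hadoop tooling), Python's built-in tracemalloc module can attribute live allocations to source lines:

```python
# Illustrative only: attribute allocations to source lines with tracemalloc.
# The paper's prototype instead tracks every (de-)allocation in C++ jobs.
import tracemalloc

tracemalloc.start(25)                 # keep up to 25 frames per allocation

def churn():
    return [str(i) * 10 for i in range(100_000)]  # many short-lived objects

data = churn()
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)                        # size, count, and source line
```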
Mira: Argonne's 10-petaflops supercomputer
Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul
2018-02-13
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
Mira: Argonne's 10-petaflops supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papka, Michael; Coghlan, Susan; Isaacs, Eric
2013-07-03
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
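A tiny sketch of the statistics-from-daily-series idea (illustrative; EFASC itself is a VBA macro with its own allocation-rule syntax): compute simple statistics from an observed daily flow series and from a synthetic series produced by an allocation rule that protects a minimum flow.

```python
# Illustrative daily-flow statistics; an allocation rule caps withdrawals so
# a minimum environmental flow is protected. Not EFASC's rule syntax.
import pandas as pd

flows = pd.Series([12.0, 9.5, 30.2, 7.8, 6.1, 14.4, 22.0],
                  index=pd.date_range("2010-06-01", periods=7))

min_flow, demand = 8.0, 5.0                      # protected flow; withdrawal
withdrawn = (flows - min_flow).clip(lower=0).clip(upper=demand)
synthetic = flows - withdrawn                    # flows after allocation

for name, s in [("observed", flows), ("allocated", synthetic)]:
    print(name, "mean:", round(s.mean(), 2), "min:", round(s.min(), 2))
```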
29 CFR 95.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., have the right of timely and unrestricted access to any books, documents, papers, or other records of... allocation plans, and any similar accounting computations of the rate at which a particular group of costs is... of the fiscal year (or other accounting period) covered by the proposal, plan, or other computation...
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-01-01
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task have drawn interest, and a detailed analysis report has been made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
Wenger, Nathalie; Méan, Marie; Castioni, Julien; Marques-Vidal, Pedro; Waeber, Gérard; Garnier, Antoine
2017-04-18
Little current evidence documents how internal medicine residents spend their time at work, particularly with regard to the proportions of time spent in direct patient care versus using computers. To describe how residents allocate their time during day and evening hospital shifts. Time and motion study. Internal medicine residency at a university hospital in Switzerland, May to July 2015. 36 internal medicine residents with an average of 29 months of postgraduate training. Trained observers recorded the residents' activities using a tablet-based application. Twenty-two activities were categorized as directly related to patients, indirectly related to patients, communication, academic, nonmedical tasks, and transition. In addition, the presence of a patient or colleague and use of a computer or telephone during each activity was recorded. Residents were observed for a total of 696.7 hours. Day shifts lasted 11.6 hours (1.6 hours more than scheduled). During these shifts, activities indirectly related to patients accounted for 52.4% of the time, and activities directly related to patients accounted for 28.0%. Residents spent an average of 1.7 hours with patients, 5.2 hours using computers, and 13 minutes doing both. Time spent using a computer was scattered throughout the day, with the heaviest use after 6:00 p.m. The study involved a small sample from 1 institution. At this Swiss teaching hospital, internal medicine residents spent more time at work than scheduled. Activities indirectly related to patients predominated, and about half the workday was spent using a computer. Primary funding source: Information Technology Department and Department of Internal Medicine of Lausanne University Hospital.
Dynamic Transfers Of Tasks Among Computers
NASA Technical Reports Server (NTRS)
Liu, Howard T.; Silvester, John A.
1989-01-01
Allocation scheme gives jobs to idle computers. An ideal resource-sharing algorithm should be dynamic, decentralized, and heterogeneous. Proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all of the above criteria. Provides method for balancing workload among hosts, resulting in improvement in response time and throughput performance of total system. Adjusts dynamically to traffic load of each station.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions to minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
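A toy version of the trade-off is easy to state. Assuming (purely for illustration) a discretization error that scales as a*h**p, a statistical error that scales as b/sqrt(N), and a per-realization cost proportional to the number of grid cells, the optimal split of a fixed budget can be found by a one-dimensional search:

    import numpy as np

    a, b, p, L, budget = 1.0, 5.0, 2.0, 100.0, 1e10  # illustrative constants

    def total_error(h):
        n = budget / (L / h) ** 3          # realizations affordable at spacing h
        return a * h ** p + b / np.sqrt(n) if n >= 1 else np.inf

    hs = np.geomspace(0.1, 10.0, 200)      # candidate grid spacings
    h_opt = min(hs, key=total_error)
    print(h_opt, int(budget / (L / h_opt) ** 3), total_error(h_opt))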
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decision-making situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables, including the number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
Dawn Usage, Scheduling, and Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louis, S
2009-11-02
This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) ensure Dawn resources are allocated on a program priority-driven basis; (2) utilize Dawn resources on the job mixes for which they were intended; and (3) minimize idle cycles through use of partitions, banks and proper job mix. The SCF workload for Dawn will be inherently different than that of Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to size of problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn as possible, but without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.
Emergent Leadership and Team Effectiveness on a Team Resource Allocation Task
1987-10-01
Participants had equivalent training and experience on this task, but they had different levels of experience with computers and video games. This differential experience matters because the task is sex-typed, to the extent that males spend more time on related instruments like computers and video games. Questions examined include whether more talkative teams performed better or worse than less talkative teams, and whether teams with extensive computer and/or video game experience performed better than inexperienced teams.
NASA Astrophysics Data System (ADS)
Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi
2015-01-01
We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo process, we explore the global latency from optimal to suboptimal resource assignments at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
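A minimal sketch of this kind of exploration, with an invented latency model (path length plus a quadratic congestion penalty) on a small connected graph; the temperature T plays the same role as in the abstract.

    import math, random
    import networkx as nx

    random.seed(0)
    G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    tasks = [(random.randrange(30), random.randrange(30)) for _ in range(60)]

    def latency(assign):
        load = [0] * 30
        for a in assign:
            load[a] += 1
        comm = sum(dist[o][a] + dist[a][d] for (o, d), a in zip(tasks, assign))
        return comm + sum(l * l for l in load)   # congestion penalty (assumed)

    def metropolis(assign, T, steps=20000):
        cur = latency(assign)
        for _ in range(steps):
            i, new = random.randrange(len(tasks)), random.randrange(30)
            old = assign[i]
            assign[i] = new
            nxt = latency(assign)
            if nxt <= cur or random.random() < math.exp((cur - nxt) / T):
                cur = nxt            # accept: downhill always, uphill with prob e^(-dE/T)
            else:
                assign[i] = old      # reject
        return cur

    print(metropolis([random.randrange(30) for _ in tasks], T=2.0))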
Comparison of OPC job prioritization schemes to generate data for mask manufacturing
NASA Astrophysics Data System (ADS)
Lewis, Travis; Veeraraghavan, Vijay; Jantzen, Kenneth; Kim, Stephen; Park, Minyoung; Russell, Gordon; Simmons, Mark
2015-03-01
Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet the cycle-time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. In order to maximize computer cluster efficiency, minimize the cost of the data processing and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each of the scheduling algorithms will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms to the OPC data processing environment to meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.
Hamdan, Sadeque; Cheaitou, Ali
2017-08-01
This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, in press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discount and varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, in press) [1] using the provided optimization code, and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, in press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, in press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, in which the problem size is changed, and then are used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation time.
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy, nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to low-burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
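In outline, the receiver-initiated transfer rule can be sketched as below; the thresholds and the workload indicator are illustrative stand-ins for the patent's parameters, not its actual values.

    LOW, HIGH = 2, 6   # assumed queue-length thresholds

    class Node:
        def __init__(self, rate):
            self.queue, self.rate = [], rate
        def workload(self):
            # Combines local queue length with the local service rate, as in the patent.
            return len(self.queue) / self.rate

    def pull_if_eligible(me, nodes):
        # Called when `me` finishes a job while lightly loaded, or when its
        # wakeup timer fires while idle: pull one job from the most burdened node.
        if me.workload() >= LOW:
            return
        donor = max(nodes, key=Node.workload)
        if donor is not me and len(donor.queue) > HIGH:
            me.queue.append(donor.queue.pop())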
Tactical resource allocation and elective patient admission planning in care processes.
Hulshof, Peter J H; Boucherie, Richard J; Hans, Erwin W; Hurink, Johann L
2013-06-01
Tactical planning of resources in hospitals concerns elective patient admission planning and the intermediate term allocation of resource capacities. Its main objectives are to achieve equitable access for patients, to meet production targets/to serve the strategically agreed number of patients, and to use resources efficiently. This paper proposes a method to develop a tactical resource allocation and elective patient admission plan. These tactical plans allocate available resources to various care processes and determine the selection of patients to be served that are at a particular stage of their care process. Our method is developed in a Mixed Integer Linear Programming (MILP) framework and copes with multiple resources, multiple time periods and multiple patient groups with various uncertain treatment paths through the hospital, thereby integrating decision making for a chain of hospital resources. Computational results indicate that our method leads to a more equitable distribution of resources and provides control of patient access times, the number of patients served and the fraction of allocated resource capacity. Our approach is generic, as the base MILP and the solution approach allow for including various extensions to both the objective criteria and the constraints. Consequently, the proposed method is applicable in various settings of tactical hospital management.
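The base MILP is easy to prototype at toy scale. The instance below (invented numbers, PuLP with the bundled CBC solver) allocates two resource capacities across three patient groups to minimize the total shortfall against production targets; it is a sketch in the spirit of the paper's model, not the model itself.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, PULP_CBC_CMD

    groups = ["G1", "G2", "G3"]
    target = {"G1": 40, "G2": 30, "G3": 25}            # patients to serve this period
    usage = {("OR", "G1"): 2, ("OR", "G2"): 1, ("OR", "G3"): 0,
             ("clinic", "G1"): 1, ("clinic", "G2"): 2, ("clinic", "G3"): 1}
    capacity = {"OR": 90, "clinic": 100}               # available resource hours

    x = {g: LpVariable(f"serve_{g}", 0, target[g], LpInteger) for g in groups}
    m = LpProblem("tactical_plan", LpMinimize)
    m += lpSum(target[g] - x[g] for g in groups)       # total unserved patients
    for r in capacity:
        m += lpSum(usage[r, g] * x[g] for g in groups) <= capacity[r]
    m.solve(PULP_CBC_CMD(msg=False))
    print({g: int(x[g].value()) for g in groups})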
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user interface software package developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
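The reference implementation is C with MPI; a near-verbatim Python transcription with mpi4py (assumed available) shows the on-demand pull. Run with, e.g., mpiexec -n 4 python bag.py; model_run is a placeholder for one modelling run or Monte Carlo trial.

    from mpi4py import MPI

    def model_run(i):                    # placeholder for one run / MC trial
        return i * i

    comm = MPI.COMM_WORLD
    N_TASKS = 100
    if comm.rank == 0:                   # master holds the bag of tasks
        status, results, nxt, workers = MPI.Status(), [], 0, comm.size - 1
        while workers:
            msg = comm.recv(source=MPI.ANY_SOURCE, status=status)
            if msg is not None:
                results.append(msg)      # a finished result (None = first request)
            if nxt < N_TASKS:
                comm.send(nxt, dest=status.source)
                nxt += 1
            else:
                comm.send(None, dest=status.source)  # bag empty: release worker
                workers -= 1
        print(len(results), "results collected")
    else:                                # workers pull whenever they become free
        comm.send(None, dest=0)
        while (task := comm.recv(source=0)) is not None:
            comm.send(model_run(task), dest=0)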
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced-order modeling, we propose an algorithm that can be adapted to all the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced-order model. Reverse time migration by reduced-order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
Econophysics of a ranked demand and supply resource allocation problem
NASA Astrophysics Data System (ADS)
Priel, Avner; Tamir, Boaz
2018-01-01
We present a two-sided resource allocation problem, between demands and supplies, where both parties are ranked; for example, in Big Data problems where a set of different computational tasks is divided between a set of computers, each with its own resources, or between employees and employers, where the employees are ranked by their fitness and the employers by their package benefits. The allocation process can be viewed as a repeated game where in each iteration the strategy is decided by a meta-rule, based on the ranks of both parties and the results of the previous games. We show the existence of a phase transition between an absorbing state, where all demands are satisfied, and an active one where part of the demands are always left unsatisfied. The phase transition is governed by the ratio between supplies and demands. In a job allocation problem we find positive correlation between the rank of the workers and the rank of the factories; higher-ranked workers are usually allocated to higher-ranked factories. These all suggest global emergent properties stemming from local variables. To demonstrate the global versus local relations, we introduce a local inertial force that increases the rank of employees in proportion to their persistence time in the same factory. We show that such a local force induces nontrivial global effects, mostly to the benefit of the lower-ranked employees.
Holding-time-aware asymmetric spectrum allocation in virtual optical networks
NASA Astrophysics Data System (ADS)
Lyu, Chunjian; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Virtual optical networks (VONs) have been considered a promising solution to support current high-capacity dynamic traffic and achieve rapid application deployment. Since most of the network services (e.g., high-definition video service, cloud computing, distributed storage) in VONs are provisioned by dedicated data centers and need different amounts of bandwidth in the two directions, the network traffic is mostly asymmetric. The common strategy, symmetric provisioning of traffic in optical networks, leads to a waste of spectrum resources under such traffic patterns. In this paper, we design a holding-time-aware asymmetric spectrum allocation module based on an SDON architecture, and an asymmetric spectrum allocation algorithm based on the module is proposed. For the purpose of reducing the waste of spectrum resources, the algorithm attempts to reallocate the idle unidirectional spectrum slots in VONs, which are generated due to the asymmetry of services' bidirectional bandwidth. This part of the resources can be exploited by other requests, such as short-time non-VON requests. We also introduce a two-dimensional asymmetric resource model for maintaining idle spectrum resource information of VONs in the spectrum and time domains. Moreover, a simulation is designed to evaluate the performance of the proposed algorithm, and results show that our proposed asymmetric spectrum allocation algorithm can reduce resource waste and blocking probability.
Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh
2014-03-01
As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, which is mainly applied to fall detection and 3-D motion reconstruction. In this study, the main focuses include distributed computing and resource allocation of processing sensing data over the computing architecture, network conditions and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance overall performance for the services of fall event detection and 3-D motion reconstruction.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did that of the minimum-norm solution (the pseudoinverse), and at about the same rate as did that of the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
OxMaR: open source free software for online minimization and randomization for clinical trials.
O'Callaghan, Christopher A
2014-01-01
Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
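For readers unfamiliar with the technique, the core of minimization fits in a screenful. This hedged, Pocock-Simon-style sketch is not OxMaR's code (OxMaR is a web application) and omits factor weighting and biased-coin options; it assigns each new participant to the arm that yields the smaller total imbalance over their factor levels.

    import random
    from collections import defaultdict

    counts = defaultdict(lambda: [0, 0])       # (factor, level) -> per-arm tallies

    def allocate(participant):                 # e.g. {"sex": "F", "age": "60+"}
        imbalance = [0, 0]
        for arm in (0, 1):
            for factor_level in participant.items():
                c = counts[factor_level][:]
                c[arm] += 1                    # imbalance if assigned to this arm
                imbalance[arm] += abs(c[0] - c[1])
        if imbalance[0] != imbalance[1]:
            arm = 0 if imbalance[0] < imbalance[1] else 1
        else:
            arm = random.randint(0, 1)         # tie: randomize
        for factor_level in participant.items():
            counts[factor_level][arm] += 1
        return arm

    print(allocate({"sex": "F", "ethnicity": "A", "age": "60+"}))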
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to the benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' use of optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding to the model a small quantity of water at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction: first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions require only linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results (1) can be obtained from the results of a rule-based simulation using a single post-processing run, and (2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
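On a toy three-node river, the double backward induction reads as follows; the benefits, the topology, and the assumption that every node can store water are invented for illustration, not taken from the paper.

    T = 4                                           # weeks
    downstream = {"head": "mid", "mid": "outlet", "outlet": None}
    mb = {"head":   [1.0, 1.0, 1.0, 1.0],           # marginal benefit of one extra
          "mid":    [3.0, 0.5, 0.5, 3.0],           # unit of water, per node and week
          "outlet": [2.0, 2.0, 2.0, 2.0]}

    V = {n: [0.0] * (T + 1) for n in downstream}    # boundary condition: V[n][T] = 0
    for t in range(T - 1, -1, -1):                  # backwards in time
        for n in ("outlet", "mid", "head"):         # outlet first, then upstream
            down = V[downstream[n]][t] if downstream[n] else 0.0
            V[n][t] = max(mb[n][t],                 # use the unit here and now
                          down,                     # pass it downstream
                          V[n][t + 1])              # store it for later
    print(V["head"][0])  # value of an extra unit at the headwater in week 0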
Hestbeck, J.B.; Nichols, J.D.; Hines, J.E.
1992-01-01
Predictions of the time-allocation hypothesis were tested with several a posteriori analyses of banding data for the mallard (Anas platyrhynchos). The time-allocation hypothesis states that the critical difference between resident and migrant birds is their allocation of time to reproduction on the breeding grounds and survival on the nonbreeding grounds. Residents have higher reproduction and migrants have higher survival. Survival and recovery rates were estimated by standard band-recovery methods for banding reference areas in the central United States and central Canada. A production-rate index was computed for each reference area with data from the U.S. Fish and Wildlife Service May Breeding Population Survey and July Production Survey. An analysis of covariance was used to test for the effects of migration distance and time period (decade) on survival, recovery, and production rates. Differences in migration chronology were tested by comparing direct-recovery distributions for different populations during the fall migration. Differences in winter locations were tested by comparing distributions of direct recoveries reported during December and January. A strong positive relationship was found between survival rate and migration distance for 3 of the 4 age and sex classes. A weak negative relationship was found between recovery rate and migration distance. No relationship was found between production rate and migration distance. During the fall migration, birds from the northern breeding populations were located north of birds from the southern breeding populations. No pattern could be found in the relative locations of breeding and wintering areas. Although our finding that survival rate increased with migration distance was consistent with the time-allocation hypothesis, our results on migration chronology and location of wintering areas were not consistent with the mechanism underlying the time-allocation hypothesis. Neither this analysis nor other recent studies of life-history characteristics of migratory and resident birds supported the time-allocation hypothesis.
7 CFR 761.205 - Computing the formula allocation.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS GENERAL PROGRAM ADMINISTRATION Allocation of Farm Loan Programs... held in the National Office reserve and distributed by base and administrative allocation, multiplied... allocation − national reserve − base allocation − administrative allocation) × State Factor. (b) To calculate the...
NASA Technical Reports Server (NTRS)
Chu, Y. Y.
1978-01-01
A unified formulation of computer-aided, multi-task decision making is presented. A strategy for the allocation of decision-making responsibility between human and computer is developed. The plans of a flight management system are studied. A model based on queueing theory was implemented.
Scheduling with Automatic Resolution of Conflicts
NASA Technical Reports Server (NTRS)
Clement, Bradley; Schaffer, Steve
2006-01-01
DSN Requirement Scheduler is a computer program that automatically schedules, reschedules, and resolves conflicts for allocations of resources of NASA s Deep Space Network (DSN) on the basis of ever-changing project requirements for DSN services. As used here, resources signifies, primarily, DSN antennas, ancillary equipment, and times during which they are available. Examples of project-required DSN services include arraying, segmentation, very-long-baseline interferometry, and multiple spacecraft per aperture. Requirements can include periodic reservations of specific or optional resources during specific time intervals or within ranges specified in terms of starting times and durations. This program is built on the Automated Scheduling and Planning Environment (ASPEN) software system (aspects of which have been described in previous NASA Tech Briefs articles), with customization to reflect requirements and constraints involved in allocation of DSN resources. Unlike prior DSN-resource- scheduling programs that make single passes through the requirements and require human intervention to resolve conflicts, this program makes repeated passes in a continuing search for all possible allocations, provides a best-effort solution at any time, and presents alternative solutions among which users can choose.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
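Stripped to one resource and one node, a delayed-gradient dual update looks like the sketch below; the demand model, the constant step size, and the 70% delivery rate are invented for illustration. When no fresh gradient arrives, the last (stale) constraint violation is reused, mirroring the paper's asynchronous setting.

    import random

    capacity, step, lam, stale_grad = 4.0, 0.05, 0.0, 0.0

    def node_demand(price):            # node's best response to the dual price
        return max(0.0, 8.0 - 2.0 * price) + random.gauss(0, 0.3)

    for k in range(500):
        if random.random() < 0.7:      # a fresh gradient arrives...
            stale_grad = node_demand(lam) - capacity
        # ...otherwise the stale constraint violation is reused
        lam = max(0.0, lam + step * stale_grad)   # projected dual update
    print(round(lam, 2))               # settles near the clearing price (about 2)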
Duncan, Lesley A; Park, Justin H; Faulkner, Jason; Schaller, Mark; Neuberg, Steven L; Kenrick, Douglas T
2007-09-01
We tested the hypothesis that, compared with sociosexually restricted individuals, those with an unrestricted approach to mating would selectively allocate visual attention to attractive opposite-sex others. We also tested for sex differences in this effect. Seventy-four participants completed the Sociosexual Orientation Inventory, and performed a computer-based task that assessed the speed with which they detected changes in attractive and unattractive male and female faces. Differences in reaction times served as indicators of selective attention. Results revealed a Sex × Sociosexuality interaction: Compared with sociosexually restricted men, unrestricted men selectively allocated attention to attractive opposite-sex others; no such effect emerged among women. This finding was specific to opposite-sex targets and did not occur in attention to same-sex others. These results contribute to a growing literature on the adaptive allocation of attention in social environments.
1987-01-01
… named after the MYCIN expert system. Host computer: PC+ is available on both symbolic and numeric computers; it operates on the IBM PC AT and TI Business-Pro (IBM PC compatible) … suppose that the database contains 100 motors, and in only one case does a lightweight motor produce more power than heavier units … every decision point takes time. … ART 2.0 … consumes 10 times less storage. ART 3.0 reduces the comparison …
Efficient Computing Budget Allocation for Finding Simplest Good Designs
Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung
2012-01-01
In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods to find m simplest good designs and to find the best m such designs, respectively; and show their asymptotic optimalities. Third, we compare the performance of the two methods with equal allocations over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404
Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks
Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...
2017-08-29
Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
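The baseline these algorithms accelerate is classical progressive filling, sketched generically below (no fat-tree structure exploited): raise all unfrozen rates together, freeze the flows crossing each link as it saturates, and repeat.

    def max_min_fair(flows, cap):
        # flows: name -> list of links traversed; cap: link -> capacity
        rate, active = {f: 0.0 for f in flows}, set(flows)
        while active:
            # largest equal increment before some link used by an active flow fills
            inc = min((cap[l] - sum(rate[f] for f in flows if l in flows[f]))
                      / sum(1 for f in active if l in flows[f])
                      for l in cap if any(l in flows[f] for f in active))
            for f in active:
                rate[f] += inc
            for l in cap:                         # freeze flows on full links
                if sum(rate[f] for f in flows if l in flows[f]) >= cap[l] - 1e-9:
                    active -= {f for f in active if l in flows[f]}
        return rate

    print(max_min_fair({"A": ["l1"], "B": ["l1", "l2"], "C": ["l2"]},
                       {"l1": 10.0, "l2": 4.0}))   # {'A': 8.0, 'B': 2.0, 'C': 2.0}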
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
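With concave performance profiles, the allocation idea can be made concrete in a few lines: repeatedly give the next small slice of time to the anytime algorithm whose expected answer quality gains most from it. The profiles below are invented for illustration.

    import math

    profiles = {                        # expected answer quality after t seconds
        "planner": lambda t: 1 - math.exp(-0.50 * t),
        "vision":  lambda t: 1 - math.exp(-1.25 * t),
        "grasp":   lambda t: 1 - math.exp(-0.20 * t),
    }

    def schedule(budget, dt=0.1):
        alloc = {a: 0.0 for a in profiles}
        for _ in range(int(budget / dt)):       # greedy on marginal expected gain
            gain = {a: p(alloc[a] + dt) - p(alloc[a]) for a, p in profiles.items()}
            alloc[max(gain, key=gain.get)] += dt
        return alloc

    print(schedule(5.0))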
A scheduling model for the aerial relay system
NASA Technical Reports Server (NTRS)
Ausrotas, R. A.; Liu, E. W.
1980-01-01
The ability of the Aerial Relay System to handle the U.S. transcontinental large hub passenger flow was analyzed with a flexible, interactive computer model. The model incorporated city pair time of day demand and a demand allocation function which assigned passengers to their preferred flights.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failure. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
Stochastic optimisation of water allocation on a global scale
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.
2014-05-01
Climate change, increasing population and further economic developments are expected to increase water scarcity for many regions of the world. Optimal water management strategies are required to minimise the gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks at the upstream parts of large catchments, whereas demands are often largest at the industrialised downstream parts. Two extremes exist in water allocation: (i) 'first come, first served,' which allows the upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'all for one, one for all,' which satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the HadGEM2-ES global circulation model that participated in the latest Coupled Model Intercomparison Project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100,000 square kilometres; the maximum number of nodes in a network was 140, for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation is developed for the stochastic optimisation of the water allocation. We apply a genetic algorithm for each segment to estimate the set of parameters that distribute the water supply for each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly (1) exchange the process description for the nodes so that different water allocation schemes can be tested, (2) exchange the objective function, (3) apply the optimisation either to the whole catchment or to different sub-levels, and (4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv
In a cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.
Effect of the lung allocation score on lung transplantation in the United States.
Egan, Thomas M; Edwards, Leah B
2016-04-01
On May 4, 2005, the system for allocation of deceased donor lungs for transplant in the United States changed from allocation based on waiting time to allocation based on the lung allocation score (LAS). We sought to determine the effect of the LAS on lung transplantation in the United States. Organ Procurement and Transplantation Network data on listed and transplanted patients were analyzed for 5 calendar years before implementation of the LAS (2000-2004), and compared with data from 6 calendar years after implementation (2006-2011). Counts were compared between eras using the Wilcoxon rank sum test. The rates of transplant increase within each era were compared using an F-test. Survival rates computed using the Kaplan-Meier method were compared using the log-rank test. After introduction of the LAS, waitlist deaths decreased significantly, from 500/year to 300/year; the number of lung transplants increased, with double the annual increase in rate of lung transplants, despite no increase in donors; the distribution of recipient diagnoses changed dramatically, with significantly more patients with fibrotic lung disease receiving transplants; age of recipients increased significantly; and 1-year survival had a small but significant increase. Allocating lungs for transplant based on urgency and benefit instead of waiting time was associated with fewer waitlist deaths, more transplants performed, and a change in distribution of recipient diagnoses to patients more likely to die on the waiting list.
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Multiplexing technique for computer communications via satellite channels
NASA Technical Reports Server (NTRS)
Binder, R.
1975-01-01
Multiplexing scheme combines technique of dynamic allocation with conventional time-division multiplexing. Scheme is designed to expedite short-duration interactive or priority traffic and to delay large data transfers; as result, each node has effective capacity of almost total channel capacity when other nodes have light traffic loads.
Robust Inversion and Data Compression in Control Allocation
NASA Technical Reports Server (NTRS)
Hodel, A. Scottedward
2000-01-01
We present an off-line computational method for control allocation design. The control allocation function δ = F(ẑ)τ + δ₀(ẑ), mapping commanded body-frame torques τ to actuator commands δ, is implicitly specified by the trim condition δ₀(z) and by a robust pseudo-inverse problem ‖I − G(z)F(ẑ)‖ < ε(z), where G(z) is a system Jacobian evaluated at operating point z, ẑ is an estimate of z, and ε(z) < 1 is a specified error tolerance. The allocation function F(ẑ) = Σᵢ ψᵢ(ẑ) Fᵢ is computed using a heuristic technique for selecting wavelet basis functions ψᵢ and a constrained least-squares criterion for selecting the allocation matrices Fᵢ. The method is applied to entry trajectory control allocation for a reusable launch vehicle (X-33).
Real-time robot deliberation by compilation and monitoring of anytime algorithms
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo
1994-01-01
Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and yields real-time robotic systems that automatically adjust resource allocation to achieve optimal performance.
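To make the idea concrete, here is a minimal, hypothetical sketch (not from the paper) of an anytime computation paired with a monitor that stops deliberation once the expected marginal gain in quality no longer justifies the time cost; the 1/√n error profile and the cost constants are illustrative assumptions.

import random

def anytime_pi(keep_going):
    """Anytime Monte Carlo estimate of pi: result quality improves with samples."""
    inside = total = 0
    while keep_going(total):
        x, y = random.random(), random.random()
        inside += (x * x + y * y) <= 1.0
        total += 1
    return 4.0 * inside / total, total

def make_monitor(value_of_quality, cost_per_sample, window=1000):
    """Stop when the predicted quality gain of the next 'window' samples
    (from an assumed 1/sqrt(n) error profile) is worth less than their cost."""
    def keep_going(n):
        if n == 0:
            return True
        gain = value_of_quality * (n ** -0.5 - (n + window) ** -0.5)
        return gain > cost_per_sample * window
    return keep_going

est, n = anytime_pi(make_monitor(value_of_quality=1.0, cost_per_sample=1e-6))
print(f"pi ~ {est:.3f} after {n} samples")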
Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei
2017-12-01
As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science, and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum execution time of the last-finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent the elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
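For reference, the same makespan-minimization problem has a classical in-silico greedy counterpart; the sketch below (our illustration, not the paper's DNA procedure) uses the longest-processing-time heuristic to assign each job to the currently least-loaded individual.

import heapq

def lpt_schedule(jobs, m):
    """Longest Processing Time greedy: assign n jobs to m individuals so that
    the finish time of the last-finished individual stays small."""
    loads = [(0.0, i, []) for i in range(m)]      # (load, individual, jobs)
    heapq.heapify(loads)
    for job in sorted(jobs, reverse=True):        # longest jobs first
        load, i, assigned = heapq.heappop(loads)  # least-loaded individual
        heapq.heappush(loads, (load + job, i, assigned + [job]))
    return max(load for load, _, _ in loads)

print(lpt_schedule([7, 5, 4, 3, 3, 2], m=2))      # 12, optimal for this instance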
Computing the Envelope for Stepwise-Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2002-01-01
Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network whose nodes are the events and whose edges are the necessary predecessor links between events. A staged maximum-flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity as solving a maximum-flow problem on the entire flow network. This makes the method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
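The core of each stage is a max-flow (equivalently, min-cut) computation that picks, among events whose occurrence before time t is still undecided, the precedence-consistent subset with maximum net production. The sketch below is our reading of that step as a standard max-weight-closure reduction using networkx; the event names and weights are invented.

import networkx as nx

def max_pending_increment(events, prec):
    """events: {name: resource production (+) or consumption (-)}.
    prec: (a, b) pairs meaning event a must occur before event b.
    Returns the largest net production achievable by any subset of pending
    events that is closed under predecessors (min-cut reduction)."""
    g = nx.DiGraph()
    for e, w in events.items():
        if w > 0:
            g.add_edge("src", e, capacity=w)     # producers hang off the source
        elif w < 0:
            g.add_edge(e, "snk", capacity=-w)    # consumers feed the sink
    for a, b in prec:
        g.add_edge(b, a, capacity=float("inf"))  # choosing b forces choosing a
    cut, (s_side, _) = nx.minimum_cut(g, "src", "snk")
    best = sum(w for w in events.values() if w > 0) - cut
    return best, {e for e in events if e in s_side}

# p2 produces 2 but may only occur after consumer c1: best choice is p1 alone.
print(max_pending_increment({"p1": 3, "p2": 2, "c1": -4}, prec=[("c1", "p2")]))
# -> (3.0, {'p1'})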
Dynamically allocating sets of fine-grained processors to running computations
NASA Technical Reports Server (NTRS)
Middleton, David
1988-01-01
Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.
13 CFR 107.1520 - How a Licensee computes and allocates Prioritized Payments to SBA.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false How a Licensee computes and allocates Prioritized Payments to SBA. 107.1520 Section 107.1520 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees...
NASA Astrophysics Data System (ADS)
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods, but in many cases their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates the generating points of the Voronoi diagram to appropriate positions. We then construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and it successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
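As a reminder of what such a rule looks like, the sketch below implements the classical asymptotic OCBA ratios (Chen et al.) for picking the best of k noisy designs; whether the paper's PSO-embedded rule matches these exact formulas is not shown here, and the means and standard deviations are illustrative.

import numpy as np

def ocba_allocation(means, stds, total_budget):
    """Classical OCBA ratios for selecting the max-mean design b:
    N_i / N_j = (s_i/d_i)^2 / (s_j/d_j)^2 for i, j != b, with d_i the gap to
    the best mean, and N_b = s_b * sqrt(sum_{i != b} (N_i/s_i)^2)."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmax(means))
    gaps = means[b] - means
    ratio = np.zeros_like(means)
    others = np.arange(len(means)) != b
    ratio[others] = (stds[others] / gaps[others]) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    # Rounding may leave the total off by a sample or two.
    return np.maximum(1, np.round(total_budget * ratio / ratio.sum())).astype(int)

print(ocba_allocation(means=[1.0, 1.2, 2.0], stds=[0.6, 0.6, 0.6], total_budget=100))
# -> [23 35 42]: designs closer to the best receive more samples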
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung
2017-01-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and it successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617
EPA-SUPPORTED (ENVIRONMENTAL PROTECTION AGENCY-SUPPORTED) WASTELOAD ALLOCATION MODELS
Modeling is increasingly becoming part of the Wasteload Allocation Process. The U.S. EPA provides guidance, technical training and computer software in support of this program. This paper reviews the support available to modelers through the Wasteload Allocation Section of EPA's ...
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-10
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
Evaluation of power system security and development of transmission pricing method
NASA Astrophysics Data System (ADS)
Kim, Hyungchul
The electric power utility industry is presently undergoing a change towards the deregulated environment. This has resulted in unbundling of generation, transmission and distribution services. The introduction of competition into unbundled electricity services may lead system operation closer to its security boundaries, resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. Under this deregulated environment, security assessment and pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed: (1) the description of probabilistic methods for power system security assessment; (2) the computation time of simulation methods; and (3) on-line security assessment for operation. A probabilistic method using Monte-Carlo simulation is proposed for power system security assessment. This method takes into account dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte-Carlo simulation can reduce computation time in comparison with straight Monte-Carlo simulation. A technique for security assessment employing a Bayes classifier is also proposed. This method can be useful for system operators making security decisions during on-line power system operation. This dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission line capacity-use and allocation by reliability benefits is computed using the probability of system failure.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
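A quick back-of-the-envelope check of the reported figures (the times and node count are taken from the abstract above):

# Reported: 2.58 h serially vs 3.3 min on 100 cloud nodes for 1M electrons.
serial_minutes = 2.58 * 60
cloud_minutes = 3.3
speedup = serial_minutes / cloud_minutes      # ~46.9x, the quoted 47x
efficiency = speedup / 100                    # fraction of ideal 100-node scaling
print(f"speed-up {speedup:.1f}x, parallel efficiency {efficiency:.0%}")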
49 CFR 19.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... authorized representatives, have the right of timely and unrestricted access to any books, documents, papers... allocation plans, and any similar accounting computations of the rate at which a particular group of costs is... supporting records starts at the end of the fiscal year (or other accounting period) covered by the proposal...
34 CFR 74.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the right of timely and unrestricted access to any books, documents, papers, or other records of... allocation plans; and any similar accounting computations of the rate at which a particular group of costs is... starts at the end of the fiscal year (or other accounting period) covered by the proposal, plan, or other...
1986-06-11
been specified, then the amount specified is returned. Otherwise the current amount allocated is returned. T’STORAGESIZE for task types or objects is...hrs DURATION’LAST 131071.99993896484375 36 hrs F.A Address Clauses Address clauses are implemented for objects. No storage is allocated for objects...it is ignored. at Allocation . An integer in the range 1..2,147,483,647. For CONTIGUOUS files, it specifies the number of 256 byte sectors. For ITAM
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, which can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.
2010-01-01
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
Business School Computer Usage, Fourth Annual UCLA Survey.
ERIC Educational Resources Information Center
Frand, Jason L.; And Others
The changing nature of the business school computing environment is monitored in a report whose purpose is to provide deans and other policy-makers with information to use in making allocation decisions and program plans. This survey focuses on resource allocations of 249 accredited U.S. business schools and 15 Canadian schools. A total of 128…
TASK ALLOCATION IN GEO-DISTRIBUTED CYBER-PHYSICAL SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aggarwal, Rachit; Smidts, Carol
This paper studies the task allocation algorithm for a distributed test facility (DTF), which aims to assemble geo-distributed cyber (software) and physical (hardware-in-the-loop) components into a prototype cyber-physical system (CPS). This allows low-cost testing on an early conceptual prototype (ECP) of the ultimate CPS (UCPS) to be developed. The DTF provides an instrumentation interface for carrying out reliability experiments remotely, such as fault propagation analysis and in-situ testing of hardware and software components in a simulated environment. Unfortunately, the geo-distribution introduces an overhead that is not inherent to the UCPS, i.e., a significant time delay in communication that threatens the stability of the ECP and is not an appropriate representation of the behavior of the UCPS. This can be mitigated by implementing a task allocation algorithm to find a suitable configuration and assign the software components to appropriate computational locations, dynamically. This would allow the ECP to operate more efficiently with less probability of being unstable due to the delays introduced by geo-distribution. The task allocation algorithm proposed in this work uses a Monte Carlo approach along with dynamic programming to identify the optimal network configuration to keep the time delays to a minimum.
Real-time WAMI streaming target tracking in fog
NASA Astrophysics Data System (ADS)
Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe
2016-05-01
Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency or security applications. Cloud Computing has been considered promising for achieving big-data integration from multi-modal sources. In many mission-critical tasks, however, powerful Cloud technology cannot satisfy tight latency tolerances because the servers are allocated far from the sensing platform; indeed, there may be no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement to Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we have investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single-target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.
LOTUS 1-2-3 and Decision Support: Allocating the Monograph Budget.
ERIC Educational Resources Information Center
Perry-Holmes, Claudia
1985-01-01
Describes the use of electronic spreadsheet software for library decision support systems using personal computers. Discussion covers templates, formulas for allocating the materials budget, LOTUS 1-2-3 and budget allocations, choosing a formula, the spreadsheet itself, graphing capabilities, and advantages and disadvantages of templates. Six…
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent in the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
VENI, video, VICI: The merging of computer and video technologies
NASA Technical Reports Server (NTRS)
Horowitz, Jay G.
1993-01-01
The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.
Internode data communications in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-03
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
Internode data communications in a parallel computer
Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E
2014-02-11
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
Trading a Problem-solving Task
NASA Astrophysics Data System (ADS)
Matsubara, Shigeo
This paper focuses on a task allocation problem, especially cases where the task is to find a solution to a search problem or a constraint satisfaction problem. If the search problem is hard to solve, a contractor may fail to find a solution. Here, the more computational resources, such as CPU time, the contractor invests in solving the search problem, the more likely a solution is to be found. This raises a new problem: the contractee has to find an appropriate quality level of task achievement as well as an efficient allocation of the task among contractors. For example, if the contractee asks the contractor to find a solution with certainty, the payment from the contractee to the contractor may exceed the contractee's benefit from obtaining a solution, which discourages the contractee from trading the task. However, solving this problem is difficult because the contractee can neither ascertain the contractor's problem-solving ability, such as the amount of available resources and knowledge (e.g. algorithms, heuristics), nor monitor what amount of resources is actually invested in solving the allocated task. To solve this problem, we propose a task allocation mechanism that is able to choose an appropriate quality level of task achievement, and we prove that this mechanism guarantees that each contractor reveals its true information. Moreover, we show by computer simulation that our mechanism can increase the contractee's utility compared with a simple auction mechanism.
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that, when the number of cores used is not a multiple of the number of cores per cluster node, some allocation strategies provide more efficient calculations than others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul
Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects are dedicated to large-scale distributed computing systems that have designed and developed resource allocation mechanisms with a variety of architectures and services. In our study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate, under a joint framework, the existing solutions for HPC, and to provide a thorough analysis and characterization of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all HPC classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we classify HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... impacted and distressed areas, HUD computes allocations based on the best available data that cover all the eligible affected areas. This Notice allocates funds based on unmet housing and economic revitalization... date of this Notice. Based on a review of the impacts from Hurricane Sandy, and estimates of unmet need...
Computational Design of Functional Ca-S-H and Oxide-Doped Alloy Systems
NASA Astrophysics Data System (ADS)
Yang, Shizhong; Chilla, Lokeshwar; Yang, Yan; Li, Kuo; Wicker, Scott; Zhao, Guang-Lin; Khosravi, Ebrahim; Bai, Shuju; Zhang, Boliang; Guo, Shengmin
Computer-aided functional materials design accelerates the discovery of novel materials. This presentation will cover our recent research advances in property prediction for the Ca-S-H system and in property simulation and experimental validation for oxide-doped high entropy alloys. Several recently developed computational materials design methods were applied to predict the physical and chemical properties of the two systems. A comparison of simulation results with the corresponding experimental data will be introduced. This research is partially supported by NSF CIMM project (OIA-15410795 and the Louisiana BoR), NSF HBCU Supplement climate change and ecosystem sustainability subproject 3, and LONI high performance computing time allocation loni mat bio7.
Allocating time to future tasks: the effect of task segmentation on planning fallacy bias.
Forsyth, Darryl K; Burt, Christopher D B
2008-06-01
The scheduling component of the time management process was used as a "paradigm" to investigate the allocation of time to future tasks. In three experiments, we compared task time allocation for a single task with the summed time allocations given for each subtask that made up the single task. In all three, we found that allocated time for a single task was significantly smaller than the summed time allocated to the individual subtasks. We refer to this as the segmentation effect. In Experiment 3, we asked participants to give estimates by placing a mark on a time line, and found that giving time allocations in the form of rounded close approximations probably does not account for the segmentation effect. We discuss the results in relation to the basic processes used to allocate time to future tasks and the means by which planning fallacy bias might be reduced.
Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.
2009-09-09
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Performance Analysis, Modeling and Scaling of HPC Applications and Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav
2016-01-13
Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.
Wayne Tlusty
1979-01-01
The concept of Visual Absorption Capability (VAC) is widely used by Forest Service landscape architects. Computer-generated graphics can aid in combining the number of times an area is seen, the distance from the observer, and land aspect relative to the viewer to determine visual magnitude. Perspective Plot allows both fast and inexpensive graphic analysis of VAC allocations, for...
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
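To illustrate the first two techniques in conventional code (a sketch under our own assumptions; the processor's actual implementation ran on the IBM 3031 and is not reproduced here), the snippet below contrasts simple random sampling with a proportional-allocation, relative count estimate over labeled pixels grouped into clusters:

import random

def srs_estimate(pixels, n, rng=random):
    """(1) Random sampling: crop fraction in a simple random sample."""
    sample = rng.sample(pixels, n)
    return sum(label == "crop" for _, label in sample) / n

def proportional_estimate(pixels, n, rng=random):
    """(2) Proportional allocation, relative count: sample each cluster in
    proportion to its size and combine the per-cluster crop fractions."""
    clusters = {}
    for cluster, label in pixels:
        clusters.setdefault(cluster, []).append(label)
    total, estimate = len(pixels), 0.0
    for members in clusters.values():
        k = max(1, round(n * len(members) / total))
        sub = rng.sample(members, min(k, len(members)))
        estimate += (len(members) / total) * sum(l == "crop" for l in sub) / len(sub)
    return estimate

scene = ([("c1", "crop")] * 60 + [("c1", "other")] * 40 +
         [("c2", "crop")] * 10 + [("c2", "other")] * 90)   # true proportion 0.35
print(srs_estimate(scene, 40), proportional_estimate(scene, 40))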
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method, then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute grid Siddon ray tracing, and to perform maximum-likelihood expectation maximization (MLEM) computed by either: (a) tracing parallel rays between subpixels on opposite detector heads; or (b) tracing rays between randomized endpoint locations on opposite detector heads.
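For orientation, the generic MLEM update that the iterative option builds on has this shape (a minimal sketch with a toy system matrix of our own; the patent's subpixel and randomized-endpoint ray-tracing variants only change how A is formed):

import numpy as np

def mlem(A, y, n_iters=50):
    """Generic MLEM for emission tomography. A[i, j]: probability that a decay
    in pixel j is detected along line of response i; y: counts per LOR."""
    x = np.ones(A.shape[1])                       # flat initial image
    sensitivity = A.sum(axis=0)                   # A^T 1
    for _ in range(n_iters):
        forward = A @ x                           # forward projection
        ratio = np.divide(y, forward, out=np.zeros_like(y), where=forward > 0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Toy 2-pixel, 3-LOR system; noiseless counts from true activity [3, 1].
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(mlem(A, A @ np.array([3.0, 1.0])))          # converges toward [3, 1]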
Allocation and User Account Policies | High-Performance Computing | NREL
Allocation and user account policies for using NREL's high-performance computing systems. When a project enters the Ended state, user access to project data is disabled. Agreements expire one (1) year after they become active or when the contractual arrangement ends.
Methodical and technological aspects of creation of interactive computer learning systems
NASA Astrophysics Data System (ADS)
Vishtak, N. M.; Frolov, D. A.
2017-01-01
The article presents a methodology for the development of an interactive computer training system for power plant personnel. The methods used in this work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for personnel preparation in educational and training centers is demonstrated. The development stages of computer training systems are identified, and factors in the efficient use of an interactive computer training system are analysed. An algorithm of the work to be performed at each development stage of the interactive computer training system is offered, which enables optimization of the time, financial, and labor expenditure of creating the system.
NASA Astrophysics Data System (ADS)
Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.
2017-05-01
In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. The objective is secured by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm power control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested on OFDM CRNs. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes the proposed algorithm suitable for more practical applications.
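The paper's pricing-based algorithm is not reproduced here, but a common low-complexity building block for this setting is water-filling with per-subcarrier caps that bound the interference induced on primary users; the sketch below (our illustration, with invented gains and caps) finds the water level by bisection:

import numpy as np

def capped_waterfilling(gains, p_total, p_cap, iters=60):
    """Power allocation p_i = clip(mu - 1/g_i, 0, cap_i) with total budget
    p_total; per-subcarrier caps model interference limits at primary users.
    Bisection on the water level mu: O(N log(1/eps)) overall."""
    gains, p_cap = np.asarray(gains, float), np.asarray(p_cap, float)
    lo, hi = 0.0, 1.0 / gains.min() + p_total + p_cap.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / gains, 0.0, p_cap)
        lo, hi = (mu, hi) if p.sum() <= p_total else (lo, mu)
    return p

p = capped_waterfilling(gains=[2.0, 1.0, 0.5], p_total=3.0, p_cap=[2.0, 2.0, 2.0])
print(p.round(3), p.sum())    # the strongest subcarrier gets the most power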
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
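The NCCS simulator itself is locally developed and not public; as a flavor of what a discrete event model of a batch queue computes, here is a minimal FIFO (no-backfill) sketch with invented job parameters:

import heapq

def simulate_fifo_queue(jobs, total_cpus):
    """jobs: (submit_time, cpus, runtime) in submission order. FIFO without
    backfill: each job starts once it heads the queue and enough CPUs are
    free. Returns each job's queue wait time."""
    finishing, free, clock, waits = [], total_cpus, 0.0, []
    for submit, cpus, runtime in jobs:
        clock = max(clock, submit)
        while free < cpus:                    # reclaim CPUs as jobs finish
            t, c = heapq.heappop(finishing)
            clock, free = max(clock, t), free + c
        waits.append(clock - submit)
        free -= cpus
        heapq.heappush(finishing, (clock + runtime, cpus))
    return waits

# Two 64-CPU jobs, then a 128-CPU job on a 128-CPU system:
print(simulate_fifo_queue([(0, 64, 10), (0, 64, 20), (5, 128, 5)], 128))
# -> [0.0, 0.0, 15.0]: the wide job waits for both running jobs to drain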
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
Newton, Amanda S; Dow, Nadia; Dong, Kathryn; Fitzpatrick, Eleanor; Cameron Wild, T; Johnson, David W; Ali, Samina; Colman, Ian; Rosychuk, Rhonda J
2017-08-11
This study piloted procedures and obtained data on intervention acceptability to determine the feasibility of a definitive randomised controlled trial (RCT) of the effectiveness of a computer-based brief intervention in the emergency department (ED). Two-arm, multi-site, pilot RCT. Adolescents aged 12-17 years presenting to three Canadian pediatric EDs from July 2010 to January 2013 for an alcohol-related complaint. Standard medical care plus computer-based screening and personalised assessment feedback (experimental group) or standard care plus computer-based sham (control group). ED and research staff, and adolescents were blinded to allocation. Main: change in alcohol consumption from baseline to 1- and 3 months post-intervention. Secondary: recruitment and retention rates, intervention acceptability and feasibility, perception of group allocation among ED and research staff, and change in health and social services utilisation. Of the 340 adolescents screened, 117 adolescents were eligible and 44 participated in the study (37.6% recruitment rate). Adolescents allocated to the intervention found it easy, quick and informative, but were divided on the credibility of the feedback provided (agreed it was credible: 44.4%, disagreed: 16.7%, unsure: 16.7%, no response: 22.2%). We found no evidence of a statistically significant relationship between which interventions adolescents were allocated to and which interventions staff thought they received. Alcohol consumption, and health and social services data were largely incomplete due to modest study retention rates of 47.7% and 40.9% at 1- and 3 months post-intervention, respectively. A computer-based intervention was acceptable to adolescents and delivery was feasible in the ED in terms of time to use and ease of use. However, adjustments are needed to the intervention to improve its credibility. A definitive RCT will be feasible if protocol adjustments are made to improve recruitment and retention rates; and increase the number of study sites and research staff. clinicaltrials.gov NCT01146665. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Work-family conflict and self-discrepant time allocation at work.
Dahm, Patricia C; Glomb, Theresa M; Manchester, Colleen Flaherty; Leroy, Sophie
2015-05-01
We examine the relationships between work-to-family conflict, time allocation across work activities, and the outcomes of work satisfaction, well-being, and salary in the context of self-regulation and self-discrepancy theories. We posit work-to-family conflict is associated with self-discrepant time allocation such that employees with higher levels of work-to-family conflict are likely to allocate less time than preferred to work activities that require greater self-regulatory resources (e.g., tasks that are complex, or those with longer term goals that delay rewards and closure) and allocate more time than preferred to activities that demand fewer self-regulatory resources or are replenishing (e.g., those that provide closure or are prosocial). We suggest this self-discrepant time allocation (actual vs. preferred time allocation) is one mechanism by which work-to-family conflict leads to negative employee consequences (Allen, Herst, Bruck, & Sutton, 2000; Mesmer-Magnus & Viswesvaran, 2005). Using polynomial regression and response surface methodology, we find that discrepancies between actual and preferred time allocations to work activities negatively relate to work satisfaction, psychological well-being, and physical well-being. Self-discrepant time allocation mediates the relationship between work-to-family conflict and work satisfaction and well-being, while actual time allocation (rather than the discrepancy) mediates the relationship between work-to-family conflict and salary. We find that women are more likely than men to report self-discrepant time allocations as work-to-family conflict increases. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the Internet, delivering application resources and data to users on demand. The base of cloud computing is the consumer-provider model: the cloud provider provides resources, which consumers access through the cloud computing model in order to build their applications according to their demand. A cloud data center is a bulk of resources in a shared-pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand the data center has to serve a huge number of requests effectively. Therefore, the resource allocation policy and scheduling policy play very important roles in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm, which provides a dynamic load balancing policy with a monitor component. The monitor component helps to increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering its state based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
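The assignment kernel at the center of such a policy is standard; here is a minimal sketch using SciPy's Hungarian solver (the CloudSim integration and the monitor component are not shown, and the cost matrix below is hypothetical, e.g. task length divided by VM speed):

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: estimated completion time (s) of task i on virtual machine j
cost = np.array([[14.0,  5.0,  8.0],
                 [ 2.0, 12.0,  6.0],
                 [ 7.0,  8.0,  3.0]])

tasks, vms = linear_sum_assignment(cost)      # Hungarian (Kuhn-Munkres) method
for t, v in zip(tasks, vms):
    print(f"task {t} -> VM {v} ({cost[t, v]:.0f}s)")
print("total cost:", cost[tasks, vms].sum())  # 5 + 2 + 3 = 10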
Planning Complex Projects Automatically
NASA Technical Reports Server (NTRS)
Henke, Andrea L.; Stottler, Richard H.; Maher, Timothy P.
1995-01-01
Automated Manifest Planner (AMP) computer program applies combination of artificial-intelligence techniques to assist both expert and novice planners, reducing planning time by orders of magnitude. Gives planners flexibility to modify plans and constraints easily, without need for programming expertise. Developed specifically for planning space shuttle missions 5 to 10 years ahead, with modifications, applicable in general to planning other complex projects requiring scheduling of activities depending on other activities and/or timely allocation of resources. Adaptable to variety of complex scheduling problems in manufacturing, transportation, business, architecture, and construction.
Activity-based costing: a practical model for cost calculation in radiotherapy.
Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien
2003-10-01
The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighed by some factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment cost. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products that have a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
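A toy version of the two-step allocation described above, with all figures invented: resource costs flow to activity groups by consumption shares, then to a treatment by time consumption.

# Step 1: resource costs -> activities, by each activity's consumption share.
resource_costs = {"personnel": 900_000, "equipment": 600_000}      # EUR/year
activity_share = {
    "simulation":         {"personnel": 0.25, "equipment": 0.20},
    "treatment_delivery": {"personnel": 0.75, "equipment": 0.80},
}
activity_cost = {a: sum(resource_costs[r] * s for r, s in shares.items())
                 for a, shares in activity_share.items()}

# Step 2: activities -> treatments, by time consumption (minutes per year).
minutes_per_year = {"simulation": 9_000, "treatment_delivery": 60_000}
rate = {a: activity_cost[a] / minutes_per_year[a] for a in activity_cost}

# A 25-fraction treatment: 60 simulation minutes, 25 x 12 delivery minutes.
cost = 60 * rate["simulation"] + 25 * 12 * rate["treatment_delivery"]
print(f"cost per treatment: {cost:,.0f} EUR")                      # 8,075 EUR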
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann C.; Brandt, James M.; Tucker, Thomas
2011-09-01
This report provides documentation for the completion of the Sandia Level II milestone 'Develop feedback system for intelligent dynamic resource allocation to improve application performance'. This milestone demonstrates the use of a scalable data collection, analysis, and feedback system that enables insight into how an application is utilizing the hardware resources of a high performance computing (HPC) platform in a lightweight fashion. Further, we demonstrate utilizing the same mechanisms used for transporting data for remote analysis and visualization to provide low-latency run-time feedback to applications. The ultimate goal of this body of work is performance optimization in the face of the ever increasing size and complexity of HPC systems.
Distributed Multiple Access Control for the Wireless Mesh Personal Area Networks
NASA Astrophysics Data System (ADS)
Park, Moo Sung; Lee, Byungjoo; Rhee, Seung Hyong
Mesh networking technologies for both high-rate and low-rate wireless personal area networks (WPANs) are under development by several standardization bodies, which are considering adopting distributed TDMA MAC protocols to provide seamless user mobility as well as good peer-to-peer QoS in WPAN mesh. It has been pointed out, however, that the absence of a central controller in a wireless TDMA MAC may cause severe performance degradation: e.g., fair allocation, service differentiation, and admission control may be hard to achieve or cannot be provided. In this paper, we suggest a new framework of resource allocation for distributed MAC protocols in WPANs. Simulation results show that our algorithm achieves both fair resource allocation and flexible service differentiation in a fully distributed way for mesh WPANs, where devices have high mobility and various requirements. We also provide an analytical model to discuss its unique equilibrium and to compute the lengths of reserved time slots at the stable point.
NASA Technical Reports Server (NTRS)
1971-01-01
The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in realistic perspective, including the advantage gained from existing space program plant and on-going programs such as the space transportation system.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
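As context for the optimal allocator, the textbook equal-slope solution under the classical model D_i(R) = sigma_i^2 * 2^(-2R) has a closed form; the sketch below is our illustration with invented variances and a simple clip-and-redistribute fix-up for negative rates (the paper's MM method instead fits a two-region model to each slice's measured curve):

import numpy as np

def allocate_bits(variances, mean_rate):
    """Equal-slope (Lagrangian) allocation for D_i(R) = var_i * 2^(-2R):
    R_i = mean_rate + 0.5 * log2(var_i / geometric_mean(var))."""
    v = np.asarray(variances, float)
    rates = mean_rate + 0.5 * np.log2(v / np.exp(np.log(v).mean()))
    for _ in range(len(v)):                   # crude fix-up for negative rates
        if not (rates < 0).any():
            break
        clipped = rates < 0
        deficit = rates[clipped].sum()        # bits to take back from the rest
        rates[clipped] = 0.0
        rates[~clipped] += deficit / (~clipped).sum()
    return rates

print(allocate_bits([4.0, 1.0, 0.25], mean_rate=1.0))   # -> [2. 1. 0.]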
Method for wiring allocation and switch configuration in a multiprocessor environment
Aridor, Yariv [Zichron Ya'akov, IL; Domany, Tamar [Kiryat Tivon, IL; Frachtenberg, Eitan [Jerusalem, IL; Gal, Yoav [Haifa, IL; Shmueli, Edi [Haifa, IL; Stockmeyer, legal representative, Robert E.; Stockmeyer, Larry Joseph [San Jose, CA
2008-07-15
A method for wiring allocation and switch configuration in a multiprocessor computer, the method including employing depth-first tree traversal to determine a plurality of paths among a plurality of processing elements allocated to a job along a plurality of switches and wires in a plurality of D-lines, and selecting one of the paths in accordance with at least one selection criterion.
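As a rough illustration of the traversal step (with made-up node names and a fewest-hops criterion standing in for the patent's selection criteria), a depth-first search can enumerate candidate paths through the switches before one is selected:

    # Hypothetical switch/wire topology as an adjacency list; 'pe*' are
    # processing elements, 'sw*' are switches. Not the patented method,
    # just a depth-first path enumeration in the same spirit.
    def find_paths(graph, src, dst, path=None):
        path = (path or []) + [src]
        if src == dst:
            return [path]
        found = []
        for nxt in graph.get(src, []):
            if nxt not in path:                 # avoid revisiting nodes
                found.extend(find_paths(graph, nxt, dst, path))
        return found

    topology = {'pe0': ['sw1'], 'sw1': ['sw2', 'sw3'],
                'sw2': ['pe1'], 'sw3': ['pe1'], 'pe1': []}
    best = min(find_paths(topology, 'pe0', 'pe1'), key=len)  # fewest hops
    print(best)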
Parallel simulation of tsunami inundation on a large-scale supercomputer
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. A bottleneck of this approach, however, is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers can enable faster-than-real-time execution of such simulations. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so very fast parallel computers are expected to become more and more prevalent in the near future. It is therefore important to investigate how to conduct a tsunami simulation efficiently on parallel computers. In this study, we target very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only in the coastal regions. To balance the computational load across CPUs in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points in that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication with adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations that connect the layers, and (3) global communication to obtain the time step that satisfies the CFL condition over the whole domain. A preliminary test on the K computer showed that the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest-resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with a resolution ratio of 1/3 between nested layers. The finest-resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation; the same simulation on 1024 cores of the K computer took 45 minutes, more than twice as fast as real time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer, considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
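The proportional CPU allocation step lends itself to a small sketch: distribute a fixed core budget over the nested layers in proportion to their grid-point counts, with largest-remainder rounding. The counts below are made up for illustration.

    # Allocate CPUs to nested layers proportionally to grid points
    # (largest-remainder rounding); grid-point counts are illustrative.
    def allocate_cpus(grid_points, total_cpus):
        total = sum(grid_points)
        quotas = [g * total_cpus / total for g in grid_points]
        alloc = [max(1, int(q)) for q in quotas]        # every layer gets >= 1 CPU
        leftover = total_cpus - sum(alloc)
        by_fraction = sorted(range(len(quotas)),
                             key=lambda i: quotas[i] - int(quotas[i]), reverse=True)
        for i in by_fraction[:max(leftover, 0)]:
            alloc[i] += 1
        return alloc

    # five nested layers, coarsest to finest
    print(allocate_cpus([2.1e6, 1.4e6, 9.5e5, 6.0e5, 4.2e5], 1024))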
Short-term Temperature Prediction Using Adaptive Computing on Dynamic Scales
NASA Astrophysics Data System (ADS)
Hu, W.; Cervone, G.; Jha, S.; Balasubramanian, V.; Turilli, M.
2017-12-01
When predicting temperature, there are specific places and times where high-accuracy predictions are harder to obtain. For example, not all sub-regions in the domain require the same amount of computing resources to generate an accurate prediction. Plateau areas might require fewer computing resources than mountainous areas because of the steeper gradient of temperature change in the latter. However, it is difficult to estimate beforehand the optimal allocation of computational resources, because several parameters in addition to orography play a role in determining the accuracy of the forecasts. The allocation of resources to perform simulations can become a bottleneck because it requires human intervention to stop jobs or start new ones. The goal of this project is to design and develop a dynamic approach to generating short-term temperature predictions that automatically determines the required computing resources and the geographic scales of the predictions based on the spatial and temporal uncertainties. The predictions and the prediction quality metrics are computed using a numerical weather prediction model, Analog Ensemble (AnEn), and the parallelization on high-performance computing systems is accomplished using the Ensemble Toolkit, one component of the RADICAL-Cybertools family of tools. RADICAL-Cybertools decouples the science needs from the computational capabilities by building an intermediate layer to run general ensemble patterns, regardless of the science. In this research, we show how the Ensemble Toolkit allows generating high-resolution temperature forecasts at different spatial and temporal resolutions. The AnEn algorithm is run using NAM analysis and forecast data for the continental United States over a period of 2 years. AnEn results show that the temperature forecasts perform well according to different probabilistic and deterministic statistical tests.
Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew F.; Ananthan, Shreyas; Churchfield, Matt
This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning the period July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development, and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales. The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.
Adaptive Management of Computing and Network Resources for Spacecraft Systems
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)
2000-01-01
It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination times. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted powers, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than the branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
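The first scheme's makespan formulation invites a compact sketch: greedy list scheduling in longest-processing-time order (the classic constant-factor approximation for minimum makespan), here read as spreading illumination workloads over a fixed set of resources. Workloads and counts are invented; this is a stand-in, not the paper's branch-and-bound or enhanced factor 2 algorithm.

    # Greedy longest-processing-time list scheduling for minimum makespan.
    import heapq

    def list_schedule(job_times, n_machines):
        loads = [(0.0, m) for m in range(n_machines)]   # (current load, machine)
        heapq.heapify(loads)
        assignment = {}
        for job, t in sorted(enumerate(job_times), key=lambda x: -x[1]):
            load, m = heapq.heappop(loads)              # least-loaded machine
            assignment[job] = m
            heapq.heappush(loads, (load + t, m))
        return assignment, max(load for load, _ in loads)

    assignment, makespan = list_schedule([5.0, 3.5, 3.0, 2.0, 1.5], 2)
    print(assignment, makespan)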
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-10
... award under this Notice. HUD computes allocations based on data that are generally available and that... Grand Forks, North Dakota, provided a very affordable soft-second loan as an incentive to help induce...
Kangas, Brian D; Berry, Meredith S; Cassidy, Rachel N; Dallery, Jesse; Vaidya, Manish; Hackenberg, Timothy D
2009-10-01
Adult human subjects engaged in a simulated Rock/Paper/Scissors game against a computer opponent. The computer opponent's responses were determined by programmed probabilities that differed across 10 blocks of 100 trials each. Response allocation in Experiment 1 was well described by a modified version of the generalized matching equation, with undermatching observed in all subjects. To assess the effects of instructions on response allocation, accurate probability-related information on how the computer was programmed to respond was provided to subjects in Experiment 2. Five of 6 subjects played the counter response of the computer's dominant programmed response near-exclusively (e.g., subjects played paper almost exclusively if the probability of rock was high), resulting in minor overmatching, and higher reinforcement rates relative to Experiment 1. On the whole, the study shows that the generalized matching law provides a good description of complex human choice in a gaming context, and illustrates a promising set of laboratory methods and analytic techniques that capture important features of human choice outside the laboratory.
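The generalized matching equation, log(B1/B2) = a·log(R1/R2) + log b, can be fit with a one-line regression; the ratios below are invented for illustration (a < 1 corresponds to the undermatching reported in Experiment 1).

    # Least-squares fit of the generalized matching law (illustrative data).
    import numpy as np

    behavior_ratio = np.array([0.45, 0.8, 1.0, 1.7, 2.6])     # B1/B2 per block
    reinforcer_ratio = np.array([0.2, 0.5, 1.0, 2.0, 5.0])    # R1/R2 per block
    a, log_b = np.polyfit(np.log(reinforcer_ratio), np.log(behavior_ratio), 1)
    print(f"sensitivity a = {a:.2f} (a < 1 indicates undermatching), "
          f"bias b = {np.exp(log_b):.2f}")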
Allocating Study Time Appropriately: Spontaneous and Instructed Performance.
ERIC Educational Resources Information Center
Dufresne, Annette; And Others
Two aspects of allocation of study time were examined among 48 third- and 48 fifth-grade children. Aspects examined were: (1) allocation of more time to more difficult material; and (2) allocation of sufficient time to meet a recall goal. Under a self-terminated procedure, children studied two booklets, one of which consisted of easy or highly…
Using Queue Time Predictions for Processor Allocation
1997-01-01
Software for Allocating Resources in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester; Zendejas, Silvino; Baldwin, John
2003-01-01
TIGRAS 2.0 is a computer program designed to satisfy a need for improved means for analyzing the tracking demands of interplanetary space-flight missions upon the set of ground antenna resources of the Deep Space Network (DSN) and for allocating those resources. Written in Microsoft Visual C++, TIGRAS 2.0 provides a single rich graphical analysis environment for use by diverse DSN personnel, by connecting to various data sources (relational databases or files) based on the stages of the analyses being performed. Notable among the algorithms implemented by TIGRAS 2.0 are a DSN antenna-load-forecasting algorithm and a conflict-aware DSN schedule-generating algorithm. Computers running TIGRAS 2.0 can also be connected using SOAP/XML to a Web services server that provides analysis services via the World Wide Web. TIGRAS 2.0 supports multiple windows and multiple panes in each window for users to view and use information, all in the same environment, to eliminate repeated switching among various application programs and Web pages. TIGRAS 2.0 enables the use of multiple windows for various requirements, trajectory-based time intervals during which spacecraft are viewable, ground resources, forecasts, and schedules. Each window includes a time navigation pane, a selection pane, a graphical display pane, a list pane, and a statistics pane.
2016-04-01
the other elements are allocated. PM/CM/FM and OUS hours are computed for each workcenter by applying productivity allowances and make-ready/put... BA trends Source: TFMMS. During the transitions (GENDET to rated and rated to PACT), SMEs worked together to allocate GENDET BA to ratings and... to allocate BA. SMEs, drawn from the Enlisted Community Managers, NAVMAC, PERs 4010, and N13, used empirical data from SME experience to select
2016-04-01
hours are dedicated to watch stations. The remaining 14 hours are where the other elements are allocated. PM/CM/FM and OUS hours are computed for each... PACT), SMEs worked together to allocate GENDET BA to ratings and then to PACT requirements. They also identified PACT-in ratings and quotas.7... ultimate rating assignments, they used mostly qualitative methods to allocate BA. SMEs, drawn from the Enlisted Community Managers, NAVMAC, PERs 4010
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics on computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp. 225 to 251), is used. Two factors make the development of efficient concurrent explicit time integration a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
ERIC Educational Resources Information Center
Erdogan, Yavuz
2009-01-01
The purpose of this paper is to compare the effects of paper-based and computer-based concept mappings on the computer hardware achievement, computer anxiety and computer attitude of eighth-grade secondary school students. The students were randomly allocated to three groups and were given instruction on computer hardware. The teaching methods used…
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
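A toy sketch of one such heuristic may clarify the mechanics: assign each task to the machine that maximizes its utility earned per unit of execution time, and drop tasks whose attainable utility has decayed below a cutoff. The task fields, the utility shape, and the threshold are all assumptions for illustration, not the paper's heuristics.

    # Illustrative utility-aware mapping step with task dropping.
    # tasks: dicts with 'eta' (exec time per machine id) and 'utility_fn'
    # (completion time -> utility earned); machines: dicts with 'id', 'queue'.
    def schedule_step(tasks, machines, now, drop_threshold=0.1):
        for task in tasks:
            best_m, best_rate = None, 0.0
            for m in machines:
                finish = now + m['queue'] + task['eta'][m['id']]
                rate = task['utility_fn'](finish) / task['eta'][m['id']]
                if rate > best_rate:
                    best_m, best_rate = m, rate
            if best_m is None or best_rate * task['eta'][best_m['id']] < drop_threshold:
                task['dropped'] = True          # low utility-earning task
            else:
                best_m['queue'] += task['eta'][best_m['id']]
                task['assigned'] = best_m['id']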
Adaptive function allocation reduces performance costs of static automation
NASA Technical Reports Server (NTRS)
Parasuraman, Raja; Mouloua, Mustapha; Molloy, Robert; Hilburn, Brian
1993-01-01
Adaptive automation offers the option of flexible function allocation between the pilot and on-board computer systems. One of the important claims for the superiority of adaptive over static automation is that such systems do not suffer from some of the drawbacks associated with conventional function allocation. Several experiments designed to test this claim are reported in this article. The efficacy of adaptive function allocation was examined using a laboratory flight-simulation task involving multiple functions of tracking, fuel-management, and systems monitoring. The results show that monitoring inefficiency represents one of the performance costs of static automation. Adaptive function allocation can reduce the performance cost associated with long-term static automation.
Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.
2015-12-01
Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment, one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described, and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
Scheduling for anesthesia at geographic locations remote from the operating room.
Dexter, Franklin; Wachtel, Ruth E
2014-08-01
Providing general anesthesia at locations away from the operating room, called remote locations, poses many medical and scheduling challenges. This review discusses how to schedule procedures at remote locations to maximize anesthesia productivity. Anesthesia labour productivity can be maximized by assigning one or more 8-h or 10-h periods of allocated time every 2 weeks dedicated specifically to each remote specialty that has enough cases to fill those periods. Remote specialties can then schedule their cases themselves into their own allocated time. Periods of unallocated time (called open, unblocked or first-come-first-served time) can be used by remote locations that do not have their own allocated time. Unless cases are scheduled sequentially into allocated time, there will be substantial extra underutilized time (time during which procedures are not being performed and personnel sit idle even though staffing has been planned) and a concomitant reduction in percent productivity. Allocated time should be calculated on the basis of usage. Remote locations with sufficient hours of cases should be allocated time reserved especially for them in which to schedule their cases, with a maximum waiting time of 2 weeks, to achieve an average wait of 1 week.
Research on allocation efficiency of the daisy chain allocation algorithm
NASA Astrophysics Data System (ADS)
Shi, Jingping; Zhang, Weiguo
2013-03-01
With improvements in aircraft reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably therefore becomes an important problem. The daisy-chain method is simple and easy to implement in the design of an allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be measured directly by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon that is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical algorithm is proposed to compute the area of the non-convex polygon. To improve the allocation efficiency, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
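The "micro-element" idea can be sketched numerically: cover the moment plane with a fine grid of cells and count the cells whose moments the allocator can attain. The attainability test below is a placeholder (a unit disc), not the daisy-chain allocation itself.

    # Grid-based ("micro-element") area estimate for a possibly non-convex
    # attainable moment set; attainable() is a placeholder predicate.
    import numpy as np

    def attainable_area(attainable, xlim, ylim, n=400):
        xs = np.linspace(xlim[0], xlim[1], n)
        ys = np.linspace(ylim[0], ylim[1], n)
        cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
        hits = sum(attainable(x, y) for x in xs for y in ys)
        return hits * cell

    # placeholder: a unit disc, whose area should come out near pi
    print(attainable_area(lambda x, y: x * x + y * y <= 1.0, (-1, 1), (-1, 1)))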
Children's self-allocation and use of classroom curricular time.
Ingram, J; Worrall, N
1992-02-01
A class of 9-10 year-olds (N = 12) in a British primary school was observed as it moved over a one-year period through three types of classroom environment: traditional directive, transitional negotiative and established negotiative. Each environment offered the children a differing relationship with curricular time and its control and allocation, moving from teacher-allocated time to child allocation. Pupil self-reports and classroom observation indicated differences in the balance of curricular spread and in the time allocated to curricular subjects in relation to the type of classroom organisation and who controlled classroom time. These differences appeared at both the class and the individual child level. The established negotiative environment recorded the most equitable curricular balance, the traditional directive the least. While individual children responded differently within and across the three classroom environments, the established negotiative environment, where time was under child control, recorded a preference for longer activity periods compared with environments where the teacher controlled time allocations.
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
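The flow-network construction can be illustrated with a tiny example, using networkx's off-the-shelf max-flow solver in place of the paper's incremental staged solver; the events, resource deltas and precedence edges are all invented for illustration.

    # Producers feed from source 's', consumers drain to sink 't';
    # precedence edges carry unbounded capacity (no 'capacity' attribute).
    import networkx as nx

    events = {'e1': +2, 'e2': -1, 'e3': -3, 'e4': +1}   # resource deltas
    G = nx.DiGraph()
    for e, delta in events.items():
        if delta > 0:
            G.add_edge('s', e, capacity=delta)          # producer
        else:
            G.add_edge(e, 't', capacity=-delta)         # consumer
    G.add_edge('e1', 'e2')                              # e1 must precede e2
    G.add_edge('e1', 'e3')                              # e1 must precede e3
    flow_value, _ = nx.maximum_flow(G, 's', 't')
    print(flow_value)                                   # matched production/consumption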
Design of safety-oriented control allocation strategies for overactuated electric vehicles
NASA Astrophysics Data System (ADS)
de Castro, Ricardo; Tanelli, Mara; Esteves Araújo, Rui; Savaresi, Sergio M.
2014-08-01
The new vehicle platforms for electric vehicles (EVs) that are becoming available are characterised by actuator redundancy, which makes it possible to jointly optimise different aspects of the vehicle motion. To do this, high-level control objectives are first specified and solved with appropriate control strategies. Then, the resulting virtual control action must be translated into actual actuator commands by a control allocation layer that takes care of computing the forces to be applied at the wheels. This step, in general, is quite demanding as far as computational complexity is concerned. In this work, a safety-oriented approach to this problem is proposed. Specifically, a four-wheel-steer EV with four in-wheel motors is considered, and the high-level motion controller is designed within a sliding mode framework with conditional integrators. For distributing the forces among the tyres, two control allocation approaches are investigated. The first, based on an extension of the cascading generalised inverse method, is computationally efficient but shows some limitations in dealing with unfeasible force values. To solve the problem, a second allocation algorithm is proposed, which relies on the linearisation of the tyre-road friction constraints. Extensive tests, carried out in the CarSim simulation environment, demonstrate the effectiveness of the proposed approach.
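A bare-bones sketch of a cascading generalised-inverse allocator may be useful: solve the virtual command with a pseudo-inverse, clip actuators that saturate, and re-solve for the remainder with the saturated ones frozen. The effectiveness matrix and limits are invented; this is not the paper's implementation.

    # Cascading generalized-inverse control allocation (illustrative).
    import numpy as np

    def cgi_allocate(B, v, u_min, u_max, max_pass=4):
        u = np.zeros(B.shape[1])
        free = np.ones(B.shape[1], dtype=bool)
        for _ in range(max_pass):
            if not free.any():
                break
            du = np.linalg.pinv(B[:, free]) @ (v - B @ u)   # residual solve
            u[free] += du
            sat = (u < u_min) | (u > u_max)
            u = np.clip(u, u_min, u_max)                    # freeze at limits
            if not (sat & free).any():
                break
            free &= ~sat                                    # redistribute to the rest
        return u

    B = np.array([[1.0, 1.0, 0.5],                          # made-up effectiveness matrix
                  [0.2, -0.8, 1.0]])
    print(cgi_allocate(B, v=np.array([1.5, 0.3]), u_min=-1.0, u_max=1.0))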
Master Software Requirements Specification
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2003-01-01
A basic function of a computational grid such as the NASA Information Power Grid (IPG) is to allow users to execute applications on remote computer systems. The Globus Resource Allocation Manager (GRAM) provides this functionality in the IPG and many other grids at this time. While the functionality provided by GRAM clients is adequate, GRAM does not support useful features such as staging several sets of files, running more than one executable in a single job submission, and maintaining historical information about execution operations. This specification is intended to provide the environmental and software functional requirements for the IPG Job Manager V2.0 being developed by AMTI for NASA.
Code of Federal Regulations, 2010 CFR
2010-04-01
... allocated to all Bureau operated and contract schools based on the number of square feet of floor space used... quarters shall be specifically excluded from the computation. (b) Square footage figures used in... Facilities Engineering. (c) In those cases, such as contract schools, where square footage figures are not...
Code of Federal Regulations, 2013 CFR
2013-04-01
... allocated to all Bureau operated and contract schools based on the number of square feet of floor space used... quarters shall be specifically excluded from the computation. (b) Square footage figures used in... Facilities Engineering. (c) In those cases, such as contract schools, where square footage figures are not...
Code of Federal Regulations, 2014 CFR
2014-04-01
... allocated to all Bureau operated and contract schools based on the number of square feet of floor space used... quarters shall be specifically excluded from the computation. (b) Square footage figures used in... Facilities Engineering. (c) In those cases, such as contract schools, where square footage figures are not...
Code of Federal Regulations, 2011 CFR
2011-04-01
... allocated to all Bureau operated and contract schools based on the number of square feet of floor space used... quarters shall be specifically excluded from the computation. (b) Square footage figures used in... Facilities Engineering. (c) In those cases, such as contract schools, where square footage figures are not...
Code of Federal Regulations, 2012 CFR
2012-04-01
... allocated to all Bureau operated and contract schools based on the number of square feet of floor space used... quarters shall be specifically excluded from the computation. (b) Square footage figures used in... Facilities Engineering. (c) In those cases, such as contract schools, where square footage figures are not...
Anticipation and Choice Heuristics in the Dynamic Consumption of Pain Relief
Story, Giles W.; Vlaev, Ivo; Dayan, Peter; Seymour, Ben; Darzi, Ara; Dolan, Raymond J.
2015-01-01
Humans frequently need to allocate resources across multiple time-steps. Economic theory proposes that subjects do so according to a stable set of intertemporal preferences, but the computational demands of such decisions encourage the use of formally less competent heuristics. Few empirical studies have examined dynamic resource allocation decisions systematically. Here we conducted an experiment involving the dynamic consumption over approximately 15 minutes of a limited budget of relief from moderately painful stimuli. We had previously elicited the participants’ time preferences for the same painful stimuli in one-off choices, allowing us to assess self-consistency. Participants exhibited three characteristic behaviors: saving relief until the end, spreading relief across time, and early spending, of which the last was markedly less prominent. The likelihood that behavior was heuristic rather than normative is suggested by the weak correspondence between one-off and dynamic choices. We show that the consumption choices are consistent with a combination of simple heuristics involving early-spending, spreading or saving of relief until the end, with subjects predominantly exhibiting the last two. PMID:25793302
Concurrent schedules: Effects of time- and response-allocation constraints
Davison, Michael
1991-01-01
Five pigeons were trained on concurrent variable-interval schedules arranged on two keys. In Part 1 of the experiment, the subjects responded under no constraints, and the ratios of reinforcers obtainable were varied over five levels. In Part 2, the conditions of the experiment were changed such that the time spent responding on the left key before a subsequent changeover to the right key determined the minimum time that must be spent responding on the right key before a changeover to the left key could occur. When the left key provided a higher reinforcer rate than the right key, this procedure ensured that the time allocated to the two keys was approximately equal. The data showed that such a time-allocation constraint only marginally constrained response allocation. In Part 3, the numbers of responses emitted on the left key before a changeover to the right key determined the minimum number of responses that had to be emitted on the right key before a changeover to the left key could occur. This response constraint completely constrained time allocation. These data are consistent with the view that response allocation is a fundamental process (and time allocation a derivative process), or that response and time allocation are independently controlled, in concurrent-schedule performance. PMID:16812632
Oscillatory brain dynamics associated with the automatic processing of emotion in words.
Wang, Lin; Bastiaansen, Marcel
2014-10-01
This study examines the automaticity of processing the emotional aspects of words, and characterizes the oscillatory brain dynamics that accompany this automatic processing. Participants read emotionally negative, neutral and positive nouns while performing a color detection task in which only perceptual-level analysis was required. Event-related potentials and time-frequency representations were computed from the concurrently measured EEG. Negative words elicited a larger P2 and a larger late positivity than positive and neutral words, indicating deeper semantic/evaluative processing of negative words. In addition, sustained alpha power suppressions were found for the emotional compared to neutral words, in the time range from 500 to 1000 ms post-stimulus. These results suggest that sustained attention was allocated to the emotional words, whereas the attention allocated to the neutral words was released after an initial analysis. This seems to hold even when the emotional content of the words is task-irrelevant.
NASA Astrophysics Data System (ADS)
Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki
2016-12-01
In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to both the inter-operator and the intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier-gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier-gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput, whereas fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation-based performance analysis validates the efficiency of the proposed solution.
A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF
NASA Astrophysics Data System (ADS)
Deatrich, D. C.; Liu, S. X.; Tafirout, R.
2010-04-01
We describe in this paper the design and implementation of Tapeguy, a high-performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based; it controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests with an elevator algorithm, avoiding unnecessary tape loading and unloading. An implementation of priorities will guarantee file delivery to all clients in a timely manner.
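The read-back reordering can be pictured with a small sketch: serve requests on the mounted tape in a sweep by block position (forward, then backward) before touching other tapes. Request fields and the tie-breaking policy are assumptions, not Tapeguy's actual code.

    # Elevator-style ordering of read requests (illustrative field names).
    def elevator_order(requests, mounted_tape, head_pos):
        same = [r for r in requests if r['tape'] == mounted_tape]
        ahead = sorted((r for r in same if r['block'] >= head_pos),
                       key=lambda r: r['block'])
        behind = sorted((r for r in same if r['block'] < head_pos),
                        key=lambda r: r['block'], reverse=True)
        others = sorted((r for r in requests if r['tape'] != mounted_tape),
                        key=lambda r: (r['tape'], r['block']))
        return ahead + behind + others      # sweep out, sweep back, then remount

    queue = [{'tape': 'T1', 'block': 120}, {'tape': 'T2', 'block': 5},
             {'tape': 'T1', 'block': 40},  {'tape': 'T1', 'block': 300}]
    print(elevator_order(queue, mounted_tape='T1', head_pos=100))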
Constructing Neuronal Network Models in Massively Parallel Environments.
Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus
2017-01-01
Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
Shizgal, Peter
2012-01-01
Almost 80 years ago, Lionel Robbins proposed a highly influential definition of the subject matter of economics: the allocation of scarce means that have alternative ends. Robbins confined his definition to human behavior, and he strove to separate economics from the natural sciences in general and from psychology in particular. Nonetheless, I extend his definition to the behavior of non-human animals, rooting my account in psychological processes and their neural underpinnings. Some historical developments are reviewed that render such a view more plausible today than would have been the case in Robbins' time. To illustrate a neuroeconomic perspective on decision making in non-human animals, I discuss research on the rewarding effect of electrical brain stimulation. Central to this discussion is an empirically based, functional/computational model of how the subjective intensity of the electrical reward is computed and combined with subjective costs so as to determine the allocation of time to the pursuit of reward. Some successes achieved by applying the model are discussed, along with limitations, and evidence is presented regarding the roles played by several different neural populations in processes posited by the model. I present a rationale for marshaling convergent experimental methods to ground psychological and computational processes in the activity of identified neural populations, and I discuss the strengths, weaknesses, and complementarity of the individual approaches. I then sketch some recent developments that hold great promise for advancing our understanding of structure-function relationships in neuroscience in general and in the neuroeconomic study of decision making in particular.
Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.
Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin
2014-10-01
High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made a tremendous amount of computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data-flow tasks which are allocated to nodes of a distributed-memory machine at coarse grain, but each of which may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also show experimentally that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
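A compact sketch of the underlying idea, assigning each fine-grain task to the device with the earliest estimated finish time, is given below; device speeds and task costs are made up, and the paper's actual schedulers are more sophisticated than this.

    # Earliest-finish-time assignment across heterogeneous devices.
    def assign_tasks(tasks, devices):
        # tasks: [(name, work)]; devices: {device: relative speed}
        ready = {d: 0.0 for d in devices}
        plan = []
        for name, work in sorted(tasks, key=lambda t: -t[1]):   # big tasks first
            dev = min(devices, key=lambda d: ready[d] + work / devices[d])
            ready[dev] += work / devices[dev]
            plan.append((name, dev))
        return plan, max(ready.values())

    plan, makespan = assign_tasks([('segment', 9.0), ('features', 4.0), ('normalize', 2.0)],
                                  {'cpu': 1.0, 'gpu': 6.0, 'mic': 3.0})
    print(plan, makespan)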
NASA Technical Reports Server (NTRS)
Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.
1975-01-01
Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.
Optimal resource allocation for defense of targets based on differing measures of attractiveness.
Bier, Vicki M; Haphuriwat, Naraphorn; Menoyo, Jaime; Zimmerman, Rae; Culpen, Alison M
2008-06-01
This article describes the results of applying a rigorous computational model to the problem of the optimal defensive resource allocation among potential terrorist targets. In particular, our study explores how the optimal budget allocation depends on the cost effectiveness of security investments, the defender's valuations of the various targets, and the extent of the defender's uncertainty about the attacker's target valuations. We use expected property damage, expected fatalities, and two metrics of critical infrastructure (airports and bridges) as our measures of target attractiveness. Our results show that the cost effectiveness of security investment has a large impact on the optimal budget allocation. Also, different measures of target attractiveness yield different optimal budget allocations, emphasizing the importance of developing more realistic terrorist objective functions for use in budget allocation decisions for homeland security.
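To make the budget-allocation trade-off concrete, here is a toy greedy allocator under an assumed diminishing-returns damage model exp(-c·x) (not the article's model); the valuations and cost-effectiveness numbers are invented.

    # Greedy marginal-return budget allocation under assumed
    # exponential diminishing returns; illustrative only.
    import math

    def allocate_budget(values, cost_eff, budget, step=1.0):
        x = [0.0] * len(values)
        for _ in range(int(budget / step)):
            gains = [v * (math.exp(-c * xi) - math.exp(-c * (xi + step)))
                     for v, c, xi in zip(values, cost_eff, x)]
            x[gains.index(max(gains))] += step      # fund the best marginal target
        return x

    # values ~ defender's target valuations; cost_eff ~ security cost effectiveness
    print(allocate_budget(values=[100.0, 60.0, 30.0],
                          cost_eff=[0.05, 0.08, 0.20], budget=50.0))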
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macevicz, S.C.
1979-05-09
This thesis attempts to explain the evolution of certain features of social insect colony population structure by the use of optimization models. Two areas are examined in detail. First, the optimal reproductive strategies of annual eusocial insects are considered. A model is constructed for the growth of workers and reproductives as a function of the resources allocated to each. Next, the allocation schedule is computed which yields the maximum number of reproductives by season's end. The results indicate that if there is constant return to scale for allocated resources, the optimal strategy is to invest in colony growth until approximately one generation before season's end, whereupon worker production ceases and reproductive effort is switched entirely to producing queens and males. Furthermore, the results indicate that if there is decreasing return to scale for allocated resources then simultaneous production of workers and reproductives is possible. The model is used to explain the colony demography of two species of wasp, Polistes fuscatus and Vespa orientalis. Colonies of these insects undergo a sudden switch from the production of workers to the production of reproductives. The second area examined concerns optimal forager size distributions for monomorphic ant colonies. A model is constructed that describes the colony's energetic profit as a function which depends on the size distribution of food resources as well as forager efficiency, metabolic costs, and manufacturing costs.
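The bang-bang character of the constant-returns result can be checked with a toy simulation: grow workers until a switch day, then devote all production to reproductives, and scan the switch day for the season-end maximum. The growth rate and season length are arbitrary choices, not the thesis's parameters.

    # Toy bang-bang allocation: workers until switch_day, reproductives after.
    def reproductives_at_end(switch_day, season=100, rate=0.05):
        workers, reproductives = 1.0, 0.0
        for day in range(season):
            production = rate * workers         # output proportional to workforce
            if day < switch_day:
                workers += production           # invest in colony growth
            else:
                reproductives += production     # invest in queens and males
        return reproductives

    best = max(range(100), key=reproductives_at_end)
    print(best, round(reproductives_at_end(best), 1))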
Sendi, Pedram; Brouwer, Werner B F; Bucher, Heiner C; Weber, Rainer; Battegay, Manuel
2007-06-01
Time is a limited resource and individuals have to decide how many hours they should allocate to work and to leisure activities. Differences in wage rate or availability of non-labour income (financial support from families and savings) may influence how individuals allocate their time between work and leisure. An increase in wage rate may induce income effects (leisure time demanded increases) and substitution effects (leisure time demanded decreases) whereas an increase in non-labour income only induces income effects. We explored the effects of differences in wage rate and non-labour income on the allocation of time in HIV-infected patients. Patients enrolled in the Swiss HIV Cohort Study (SHCS) provided information on their time allocation, i.e. number of hours worked in 1998. A multinomial logistic regression model was used to test for income and substitution effects. Our results indicate that (i) the allocation of time in HIV-infected patients does not differ with level of education (i.e., wage rate), and that (ii) availability of non-labour income induces income effects, i.e. individuals demand more leisure time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graves, Todd L; Hamada, Michael S
2008-01-01
Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We apply these tools to examples that were previously solvable only through the use of ingenious approximations, and use genetic algorithms to guide resource allocation.
NASA Technical Reports Server (NTRS)
Witkop, D. L.; Dale, B. J.; Gellin, S.
1991-01-01
The programming aspects of SFENES are described in the User's Manual. The information presented is provided for the installation programmer. It is sufficient to fully describe the general program logic and required peripheral storage. All element-generated data is stored externally to reduce required memory allocation. A separate section is devoted to the description of these files, thereby permitting the optimization of Input/Output (I/O) time through efficient buffer descriptions. Individual subroutine descriptions are presented along with the complete Fortran source listings. A short description of the major control, computation, and I/O phases is included to aid in obtaining an overall familiarity with the program's components. Finally, a discussion of the suggested overlay structure which allows the program to execute with a reasonable amount of memory allocation is presented.
NASA Astrophysics Data System (ADS)
Rahman, Imran; Vasant, Pandian M.; Singh, Balbir Singh Mahinder; Abdullah-Al-Wadud, M.
2014-10-01
Recent research toward the use of green technologies to reduce pollution and increase the penetration of renewable energy sources in the transportation sector is gaining popularity. The development of the smart grid environment focusing on PHEVs may also heal some of the prevailing grid problems by enabling the implementation of the Vehicle-to-Grid (V2G) concept. Intelligent energy management is an important issue which has already drawn much attention from researchers. Most of these works require the formulation of mathematical models which extensively use computational intelligence-based optimization techniques to solve many technical problems. Higher penetration of PHEVs requires adequate charging infrastructure as well as smart charging strategies. We used the Gravitational Search Algorithm (GSA) to intelligently allocate energy to the PHEVs considering constraints such as energy price, remaining battery capacity, and remaining charging time.
Efficient Resources Provisioning Based on Load Forecasting in Cloud
Hu, Rongdong; Jiang, Jingfei; Liu, Guangming; Wang, Lixin
2014-01-01
Cloud providers should ensure QoS while maximizing resource utilization. One optimal strategy is to allocate resources in a timely, fine-grained mode according to an application's actual resource demand. The necessary precondition of this strategy is obtaining future load information in advance. We propose a multi-step-ahead load forecasting method, KSwSVR, based on statistical learning theory, which is suitable for the complex and dynamic characteristics of the cloud computing environment. It integrates an improved support vector regression algorithm and a Kalman smoother. Public trace data taken from multiple types of resources were used to verify its prediction accuracy, stability, and adaptability, compared with AR, BPNN, and standard SVR. Subsequently, based on the predicted results, a simple and efficient strategy is proposed for resource provisioning. A CPU allocation experiment indicated it can effectively reduce resource consumption while meeting service-level agreement requirements.
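As a rough sketch of the forecast-then-provision idea, the following fragment trains an off-the-shelf support vector regressor on lagged load samples and feeds each prediction back as input to obtain multi-step-ahead forecasts; a simple exponential smoother stands in for the paper's Kalman smoother. All parameter values, and the synthetic load trace, are assumptions for illustration only, not the paper's KSwSVR.

```python
import numpy as np
from sklearn.svm import SVR

def make_lagged(series, n_lags):
    """Build (X, y) pairs where each row of X holds the previous n_lags loads."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

def forecast(series, n_lags=8, horizon=5):
    X, y = make_lagged(series, n_lags)
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
    window = list(series[-n_lags:])
    preds = []
    for _ in range(horizon):          # multi-step-ahead: feed predictions back
        p = model.predict(np.array(window).reshape(1, -1))[0]
        preds.append(p)
        window = window[1:] + [p]
    # Stand-in for the Kalman smoother: exponential smoothing of the raw path.
    alpha, s = 0.6, preds[0]
    return [s := alpha * p + (1 - alpha) * s for p in preds]

# Synthetic CPU-load trace with noise, clipped at zero.
cpu_load = np.clip(np.sin(np.arange(200) / 10.0) + 0.1 * np.random.randn(200) + 1.5, 0, None)
print(forecast(cpu_load))
```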
Advanced Technology System Scheduling Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ang, Jim; Carnes, Brian; Hoang, Thuc
In the fall of 2005, the Advanced Simulation and Computing (ASC) Program appointed a team to formulate a governance model for allocating resources and scheduling the stockpile stewardship workload on ASC capability systems. This update to the original document takes into account the new technical challenges and roles for advanced technology (AT) systems and the new ASC Program workload categories that must be supported. The goal of this updated model is to effectively allocate and schedule AT computing resources among all three National Nuclear Security Administration (NNSA) laboratories for weapons deliverables that merit priority on this class of resource. The process outlined below describes how proposed work can be evaluated and approved for resource allocations while preserving high effective utilization of the systems. This approach will provide the broadest possible benefit to the Stockpile Stewardship Program (SSP).
Farmer, Andrew; Toms, Christy; Hardinge, Maxine; Williams, Veronika; Rutter, Heather; Tarassenko, Lionel
2014-01-08
The potential for telehealth-based interventions to provide remote support, education and improve self-management for long-term conditions is increasingly recognised. This trial aims to determine whether an intervention delivered through an easy-to-use tablet computer can improve the quality of life of patients with chronic obstructive pulmonary disease (COPD) by providing personalised self-management information and education. The EDGE (sElf management anD support proGrammE) for COPD is a multicentre, randomised controlled trial designed to assess the efficacy of an Internet-linked tablet computer-based intervention (the EDGE platform) in improving quality of life in patients with moderate to very severe COPD compared with usual care. Eligible patients are randomly allocated to receive the tablet computer-based intervention or usual care in a 2:1 ratio using a web-based randomisation system. Participants are recruited from respiratory outpatient clinics and pulmonary rehabilitation courses as well as from those recently discharged from hospital with a COPD-related admission and from primary care clinics. Participants allocated to the tablet computer-based intervention complete a daily symptom diary and record clinical symptoms using a Bluetooth-linked pulse oximeter. Participants allocated to receive usual care are provided with all the information given to those allocated to the intervention but without the use of the tablet computer or the facility to monitor their symptoms or physiological variables. The primary outcome of quality of life is measured using the St George's Respiratory Questionnaire for COPD patients (SGRQ-C) at baseline, 6 and 12 months. Secondary outcome measures are recorded at these intervals in addition to 3 months. The Research Ethics Committee for Berkshire-South Central has provided ethical approval for the conduct of the study in the recruiting regions. The results of the study will be disseminated through peer-reviewed publications and conference presentations. Current controlled trials ISRCTN40367841.
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
NASA Astrophysics Data System (ADS)
Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos
2015-02-01
The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to the TV audience of various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
Data location-aware job scheduling in the grid. Application to the GridWay metascheduler
NASA Astrophysics Data System (ADS)
Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.
2010-04-01
Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider essential characteristics of an optimum scheduling system: aim to minimise not only job turnaround time but also data replication, flexibility to support different virtual organisation requirements and capability to coordinate the tasks of data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general purpose metascheduler, part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to set data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers to implement different data-aware scheduling algorithms.
A multi-GPU real-time dose simulation software framework for lung radiotherapy.
Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A
2012-09-01
Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and executed as a pipeline. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with commercial CPU-based software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas
2008-01-01
A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Akyuz, F. A.; Heer, E.
1972-01-01
This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN 2 for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps in the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN 5 language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.
Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud
Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil
2016-01-01
Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated using asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the measured time complexity the required clock frequency of the host is computed. The CPU frequency is then scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total watts consumed.
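A minimal sketch of the frequency-scaling step is given below: given an estimated operation count from the identified complexity class and a deadline, the host is chosen by best fit and its clock is set to the lowest discrete frequency level that still meets the deadline. The host table, cycles-per-operation figure, and frequency levels are hypothetical values for illustration, not figures from the paper.

```python
import math

# Hypothetical host table: spare capacity plus discrete DVFS levels (Hz).
hosts = [
    {"name": "h1", "free_mips": 4000, "f_levels": [1.2e9, 1.8e9, 2.4e9, 3.0e9]},
    {"name": "h2", "free_mips": 2500, "f_levels": [1.0e9, 1.6e9, 2.4e9]},
]

def best_fit_host(hosts, required_mips):
    """Pick the host whose spare capacity exceeds the demand by the least."""
    feasible = [h for h in hosts if h["free_mips"] >= required_mips]
    return min(feasible, key=lambda h: h["free_mips"] - required_mips)

def required_frequency(op_count, deadline_s, cycles_per_op=4.0):
    """Clock rate (Hz) needed to retire op_count operations by the deadline."""
    return op_count * cycles_per_op / deadline_s

# A request whose analyzed complexity is O(n log n) with n = 1e6 elements.
n = 1_000_000
ops = n * math.log2(n)
host = best_fit_host(hosts, required_mips=2000)
f_req = required_frequency(ops, deadline_s=0.05)
# DVFS step: the lowest level that still meets the deadline saves energy.
f_set = min((f for f in host["f_levels"] if f >= f_req),
            default=host["f_levels"][-1])
print(f"{host['name']}: need {f_req/1e9:.2f} GHz, set {f_set/1e9:.2f} GHz")
```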
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
Computer programs for degaussing, magnetic field calculation, low speed wing flap systems aerodynamics, structural panel analysis, dynamic stress/strain data acquisition, allocation and network scheduling, and digital filters are discussed.
TT : a program that implements predictor sort design and analysis
S. P. Verrill; D. W. Green; V. L. Herian
1997-01-01
In studies on wood strength, researchers sometimes replace experimental unit allocation via random sampling with allocation via sorts based on nondestructive measurements of strength predictors such as modulus of elasticity and specific gravity. This report documents TT, a computer program that implements recently published methods to increase the sensitivity of such...
Using Excel's Solver Function to Facilitate Reciprocal Service Department Cost Allocations
ERIC Educational Resources Information Center
Leese, Wallace R.
2013-01-01
The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated and theoretically incorrect direct or step-down methods. This article illustrates how Excel's Solver…
ERIC Educational Resources Information Center
Norris, Graeme, Ed.
Research progress by member institutions is reviewed with regard to university administration, computing, committees, libraries, and student welfare. Consideration is given to effectiveness and efficiency, management information, management by objectives, periodic review of objectives, strategy, and analytic resource allocation. Two research…
Using Excel's Matrix Operations to Facilitate Reciprocal Cost Allocations
ERIC Educational Resources Information Center
Leese, Wallace R.; Kizirian, Tim
2009-01-01
The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated direct or step-down methods. Here is a short example demonstrating how Excel's sometimes unknown matrix…
Ground data systems resource allocation process
NASA Technical Reports Server (NTRS)
Berner, Carol A.; Durham, Ralph; Reilly, Norman B.
1989-01-01
The Ground Data Systems Resource Allocation Process at the Jet Propulsion Laboratory provides medium- and long-range planning for the use of Deep Space Network and Mission Control and Computing Center resources in support of NASA's deep space missions and Earth-based science. Resources consist of radio antenna complexes and associated data processing and control computer networks. A semi-automated system was developed that allows operations personnel to interactively generate, edit, and revise allocation plans spanning periods of up to ten years (as opposed to only two or three weeks under the manual system) based on the relative merit of mission events. It also enhances scientific data return. A software system known as the Resource Allocation and Planning Helper (RALPH) merges the conventional methods of operations research, rule-based knowledge engineering, and advanced data base structures. RALPH employs a generic, highly modular architecture capable of solving a wide variety of scheduling and resource sequencing problems. The rule-based RALPH system has saved significant labor in resource allocation. Its successful use affirms the importance of establishing and applying event priorities based on scientific merit, and the benefit of continuity in planning provided by knowledge-based engineering. The RALPH system exhibits a strong potential for minimizing development cycles of resource and payload planning systems throughout NASA and the private sector.
Radeva, Tsvetomira; Dornhaus, Anna; Lynch, Nancy; Nagpal, Radhika; Su, Hsin-Hao
2017-12-01
Adaptive collective systems are common in biology and beyond. Typically, such systems require a task allocation algorithm: a mechanism or rule-set by which individuals select particular roles. Here we study the performance of such task allocation mechanisms measured in terms of the time for individuals to allocate to tasks. We ask: (1) Is task allocation fundamentally difficult, and thus costly? (2) Does the performance of task allocation mechanisms depend on the number of individuals? And (3) what other parameters may affect their efficiency? We use techniques from distributed computing theory to develop a model of a social insect colony, where workers have to be allocated to a set of tasks; however, our model is generalizable to other systems. We show, first, that the ability of workers to quickly assess demand for work in tasks they are not currently engaged in crucially affects whether task allocation is quickly achieved or not. This indicates that in social insect tasks such as thermoregulation, where temperature may provide a global and near instantaneous stimulus to measure the need for cooling, for example, it should be easy to match the number of workers to the need for work. In other tasks, such as nest repair, it may be impossible for workers not directly at the work site to know that this task needs more workers. We argue that this affects whether task allocation mechanisms are under strong selection. Second, we show that colony size does not affect task allocation performance under our assumptions. This implies that when effects of colony size are found, they are not inherent in the process of task allocation itself, but due to processes not modeled here, such as higher variation in task demand for smaller colonies, benefits of specialized workers, or constant overhead costs. Third, we show that the ratio of the number of available workers to the workload crucially affects performance. Thus, workers in excess of those needed to complete all tasks improve task allocation performance. This provides a potential explanation for the phenomenon that social insect colonies commonly contain inactive workers: these may be a 'surplus' set of workers that improves colony function by speeding up optimal allocation of workers to tasks. Overall our study shows how limitations at the individual level can affect group level outcomes, and suggests new hypotheses that can be explored empirically.
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
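The flavor of such a constrained allocation can be conveyed with a capped water-filling routine: per-subcarrier power caps stand in for the interference limits that subcarriers adjacent to PU bands must respect, and the water level is found by bisection. This is a generic illustration under assumed gains and caps, not the paper's suboptimal algorithm.

```python
import numpy as np

def capped_waterfilling(gains, p_total, p_caps, iters=50):
    """Maximize sum log2(1 + g_i p_i) s.t. sum p_i <= p_total, 0 <= p_i <= cap_i."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()
    for _ in range(iters):                  # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / gains, 0.0, p_caps)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return p

gains = np.array([2.0, 1.5, 0.8, 0.4, 1.1])   # channel gains per subcarrier
caps  = np.array([1.0, 0.2, 1.0, 1.0, 0.1])   # tighter caps near PU bands
p = capped_waterfilling(gains, p_total=2.0, p_caps=caps)
print(p, p.sum())                             # powers respect caps and budget
```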
Social Computing and the Attention Economy
NASA Astrophysics Data System (ADS)
Huberman, Bernardo A.
2013-04-01
Social computing focuses on the interaction between social behavior and information, especially on how the latter propagates across social networks and is consumed and transformed in the process. At the same time the ubiquity of information has left it devoid of much monetary value. The scarce, and therefore valuable, resource is now attention, and its allocation gives rise to an attention economy that determines how content is consumed and propagated. Since two major factors involved in getting attention are novelty and popularity, we analyze the role that both play in attracting attention to web content and how to prioritize them in order to maximize it. We also demonstrate that the relative performance of strategies based on prioritizing either popularity or novelty exhibit an abrupt change around a critical value of the novelty decay time, resembling a phase transition.
Computer versus paper--does it make any difference in test performance?
Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin
2015-01-01
CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that controls for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit greatly from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. Both groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time for the paper-pencil version might be due to the time needed to write the answer down and check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.
1976-01-01
An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.
High-speed prediction of crystal structures for organic molecules
NASA Astrophysics Data System (ADS)
Obata, Shigeaki; Goto, Hitoshi
2015-02-01
We developed a master-worker type parallel algorithm for allocating crystal structure optimization tasks to distributed compute nodes, in order to improve the performance of crystal structure prediction simulations. Performance experiments were carried out on the TUT-ADSIM supercomputer system (HITACHI HA8000-tc/HT210). The experimental results show that our parallel algorithm achieved speed-ups of 214 and 179 times using 256 processor cores on crystal structure optimizations in predictions of crystal structures for 3-aza-bicyclo(3.3.1)nonane-2,4-dione and 2-diazo-3,5-cyclohexadiene-1-one, respectively. We expect this parallel algorithm to reduce the computational cost of any crystal structure prediction.
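A toy master-worker allocation in the same spirit can be written with Python's multiprocessing: a pool of workers pulls structure-optimization tasks as they become free, so fast tasks never queue behind slow ones. The placeholder objective below is an invented stand-in; the real system distributes full structure optimizations across supercomputer nodes rather than local processes.

```python
from multiprocessing import Pool

def optimize_structure(seed):
    """Placeholder for one trial-structure optimization; returns (seed, energy)."""
    x, energy = seed * 0.37, 0.0
    for _ in range(100_000):            # pretend local minimization by descent
        energy = (x - 1.0) ** 2
        x -= 0.001 * 2 * (x - 1.0)
    return seed, energy

if __name__ == "__main__":
    tasks = range(64)                   # 64 candidate structures to optimize
    with Pool(processes=8) as pool:     # the master hands tasks to idle workers
        # imap_unordered yields results as workers finish, so fast tasks do not
        # wait behind slow ones; this is the essence of dynamic allocation.
        for seed, energy in pool.imap_unordered(optimize_structure, tasks):
            print(f"structure {seed}: minimized energy {energy:.3e}")
```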
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
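The backbone of such an SDP scheme can be sketched for a single reservoir as below: total-cost tables are built backward over stages, and the inner minimization over release decisions, handled by a GA in the study so that head-dependent pumping costs and non-convexity can be accommodated, is replaced here by a plain grid search. The storage grid, inflow scenarios, and costs are all invented for illustration.

```python
import numpy as np

S = np.linspace(0, 100, 21)                 # discrete storage grid
inflows, probs = [10.0, 30.0], [0.5, 0.5]   # inflow scenarios and probabilities
releases = np.linspace(0, 50, 26)           # candidate water allocations
demand, curtail_cost, T = 25.0, 5.0, 12     # demand, curtailment cost, stages

future = np.zeros(len(S))                   # terminal future-cost function
for _ in range(T):                          # backward recursion over stages
    current = np.empty(len(S))
    for i, s in enumerate(S):
        best = np.inf
        for r in releases:                  # grid search stands in for the GA
            cost = 0.0
            for q, p in zip(inflows, probs):
                # storage balance; feasibility of r vs. available water is
                # ignored in this toy model
                s_next = np.clip(s + q - r, 0.0, 100.0)
                immediate = curtail_cost * max(demand - r, 0.0)
                cost += p * (immediate + np.interp(s_next, S, future))
            best = min(best, cost)
        current[i] = best
    future = current
print(future.round(1))                      # expected total cost vs. start storage
```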
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks have been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm's ability of global search and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution achieves significantly better performance than other algorithms.
The Effect of Student Time Allocation on Academic Achievement
ERIC Educational Resources Information Center
Grave, Barbara S.
2011-01-01
There is a large literature on the influence of institutional characteristics on student academic achievement. In contrast, relatively little research focusses on student time allocation and its effects on student performance. This paper contributes to the literature by investigating the effect of student time allocation on the average grade of…
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
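The sketch below conveys the sequential-allocation idea: after an initial stage, each batch of replications goes to the design whose noise-to-boundary-distance ratio makes its top-m membership hardest to resolve. This scoring rule is a simplification in the OCBA spirit, not the paper's asymptotically optimal allocation; the true means and all budget figures are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([1.0, 1.2, 1.5, 1.6, 2.0, 2.2])  # unknown to the procedure
k, m, n0, batch, budget = len(true_means), 2, 10, 20, 600

def simulate(i, n):
    """Noisy output of design i (stand-in for an expensive simulation)."""
    return true_means[i] + rng.normal(0.0, 1.0, n)

samples = [list(simulate(i, n0)) for i in range(k)]     # initial stage
spent = k * n0
while spent < budget:
    means = np.array([np.mean(s) for s in samples])
    stds = np.array([np.std(s, ddof=1) for s in samples])
    order = np.argsort(means)[::-1]
    # boundary separating the observed top-m designs from the rest
    boundary = 0.5 * (means[order[m - 1]] + means[order[m]])
    score = stds / (np.abs(means - boundary) + 1e-9)    # hardest to classify
    i = int(np.argmax(score))
    samples[i].extend(simulate(i, batch))               # spend the next batch
    spent += batch
top_m = sorted(np.argsort([np.mean(s) for s in samples])[::-1][:m])
print("selected top-m designs:", top_m)
```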
If I Survey You Again Today, Will You Still Love Me Tomorrow?
ERIC Educational Resources Information Center
Webster, Sarah P.
1989-01-01
Description of academic computing services at Syracuse University focuses on surveys of students and faculty that have identified hardware and software use, problems encountered, prior computer experience, and attitudes toward computers. Advances in microcomputers, word processing, and graphics are described; resource allocation is discussed; and…
45 CFR 402.31 - Determination of allocations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... included in the computation of its allocation for a fiscal year by adding to the sum of SLIAG-related costs..., pursuant to § 402.41(c) (1) and (2). For fiscal years 1993 and 1994, the Department will add to the amount... the event that a State has not submitted an approved report for a fiscal year, the Department will...
System Resource Allocation Requests | High-Performance Computing | NREL
Researchers must have an HPC User Account to use the online allocation request system. If you need an HPC User Account, request one online: visit User Accounts, click the green "Request Account" button, and follow the online instructions provided in the DocuSign form.
COOPERATIVE ROUTING FOR DYNAMIC AERIAL LAYER NETWORKS
2018-03-01
...information accumulation at the physical layer, and study the cooperative routing and resource allocation problems associated with such SU networks. ...interference power constraint is studied. In [Shi2012Joint], an optimal power and sub-carrier allocation strategy to maximize SUs' throughput subject to...
Allocation model for firefighting resources ... a progress report
Frederick W. Bratten
1970-01-01
A study is underway at the Pacific Southwest Forest and Range Experiment Station to develop computer techniques for planning suppression efforts in large wildfires. A mathematical model for allocation of firefighting resources in a going fire has been developed. Explicit definitions are given for strategic and tactical planning functions. How the model might be used is...
Allocating operating room block time using historical caseload variability.
Hosseini, Narges; Taaffe, Kevin M
2015-12-01
Operating room (OR) allocation and planning is one of the most important strategic decisions that OR managers face. The number of ORs that a hospital opens depends on the number of blocks that are allocated to the surgical groups, services, or individual surgeons, combined with the amount of open posting time (i.e., first-come, first-served posting) that the hospital wants to provide. By allocating too few ORs, a hospital may turn away surgery demand, whereas opening too many ORs could prove to be a costly decision. The traditional method of determining block frequency and size considers the average historical surgery demand for each group. However, given that there are penalties to the system for having too much or too little OR time allocated to a group, demand variability should play a role in determining the real OR requirement. In this paper we present an algorithm that allocates block time based on this demand variability, specifically accounting for both over-utilized time (time used beyond the block) and under-utilized time (time unused within the block). This algorithm provides a solution to the situation in which total caseload demand can be accommodated by the total OR resource set, in other words not in a capacity-constrained situation. We have found this scenario to be common among several regional healthcare providers with large OR suites and excess capacity. This algorithm could be used to adjust existing blocks or to assign new blocks to surgeons that did not previously have a block. We also have studied the effect of turnover time on the number of ORs that needs to be allocated. Numerical experiments based on real data from a large health-care provider indicate the opportunity to achieve over 2,900 hours of OR time savings through improved block allocations.
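One way to make the over/under-utilization tradeoff concrete is a small empirical-cost search over candidate block lengths, shown below; with linear penalties the minimizer is the c_over/(c_over + c_under) quantile of historical demand, a newsvendor-style result. The cost weights and demand samples are assumptions for illustration, not the paper's algorithm or data.

```python
import numpy as np

def best_block_hours(demand_samples, c_under=1.0, c_over=1.75, grid_step=0.5):
    """Block length minimizing average penalized time over historical demand."""
    d = np.asarray(demand_samples, dtype=float)
    grid = np.arange(0.0, d.max() + grid_step, grid_step)
    costs = [np.mean(c_under * np.clip(b - d, 0, None)    # unused block time
                     + c_over * np.clip(d - b, 0, None))  # over-utilized time
             for b in grid]
    return grid[int(np.argmin(costs))]

# Weekly caseload hours observed for one surgical group (illustrative).
weekly_hours = [6.5, 8.0, 7.2, 9.5, 5.0, 7.8, 8.4, 10.1, 6.9, 7.5]
print(best_block_hours(weekly_hours))  # near the c_over/(c_over+c_under) quantile
```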
A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX
NASA Astrophysics Data System (ADS)
Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang
2010-07-01
In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. Besides, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a Search Based Pricing Mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.
Buying and Using Tomorrow's Computers in Today's Tertiary Institutions.
ERIC Educational Resources Information Center
Sungalia, Helen
1980-01-01
Higher-education administrators are alerted to the advent of the microprocessor and the capabilities of desk computers. The potential use of the microcomputer in administrative decision making, efficiency, and resource allocation are reviewed briefly. (MSE)
Nezarat, Amin; Dastghaibifard, GH
2015-01-01
One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability, and on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is based on economic methods, using such methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repetitive game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used, and the proposed model is simulated in CloudSim; the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations, and yields the most utility to the provider.
Kumar, Sameer; Mamidala, Amith R.; Ratterman, Joseph D.; Blocksome, Michael; Miller, Douglas
2013-09-03
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.
Benda, Natalie C; Meadors, Margaret L; Hettinger, A Zachary; Ratwani, Raj M
2016-06-01
We evaluate how the transition from a homegrown electronic health record to a commercial one affects emergency physician work activities from initial introduction to long-term use. We completed a quasi-experimental study across 3 periods during the transition from a homegrown system to a commercially available electronic health record with computerized provider order entry. Observation periods consisted of pre-implementation, 1 month before the implementation of the commercial electronic health record; "go-live," 1 week after implementation; and post-implementation, 3 to 4 months after use began. Fourteen physicians were observed in each period (N=42) with a minute-by-minute observation template to record emergency physician time allocation across 5 task-based categories (computer, verbal communication, patient room, paper [chart/laboratory results], and other). The average number of tasks physicians engaged in per minute was also analyzed as an indicator of task switching. From pre- to post-implementation, there were no significant differences in the amount of time spent on the various task categories. There were changes in time allocation from pre-implementation to go-live and from go-live to post-implementation, characterized by a significant increase in time spent on computer tasks during go-live relative to the other periods. Critically, the number of tasks physicians engaged in per minute increased from 1.7 during pre-implementation to 1.9 during post-implementation (difference 0.19 tasks per minute; 95% confidence interval 0.039 to 0.35). The increase in the number of tasks physicians engaged in per minute post-implementation indicates that physicians switched tasks more frequently. Frequent task switching behavior raises patient safety concerns.
Bandwidth-sharing in LHCONE, an analysis of the problem
NASA Astrophysics Data System (ADS)
Wildish, T.
2015-12-01
The LHC experiments have traditionally regarded the network as an unreliable resource, one which was expected to be a major source of errors and inefficiency at the time their original computing models were derived. Now, however, the network is seen as much more capable and reliable. Data are routinely transferred with high efficiency and low latency to wherever computing or storage resources are available to use or manage them. Although there was sufficient network bandwidth for the experiments' needs during Run-1, they cannot rely on ever-increasing bandwidth as a solution to their data-transfer needs in the future. Sooner or later they need to consider the network as a finite resource that they interact with to manage their traffic, in much the same way as they manage their use of disk and CPU resources. There are several possible ways for the experiments to integrate management of the network in their software stacks, such as the use of virtual circuits with hard bandwidth guarantees or soft real-time flow-control, with somewhat less firm guarantees. Abstractly, these can all be considered as the users (the experiments, or groups of users within the experiment) expressing a request for a given bandwidth between two points for a given duration of time. The network fabric then grants some allocation to each user, dependent on the sum of all requests and the sum of available resources, and attempts to ensure the requirements are met (either deterministically or statistically). An unresolved question at this time is how to convert the users' requests into an allocation. Simply put, how do we decide what fraction of a network's bandwidth to allocate to each user when the sum of requests exceeds the available bandwidth? The usual problems of any resource-scheduling system arise here, namely how to ensure the resource is used efficiently and fairly, while still satisfying the needs of the users. Simply fixing quotas on network paths for each user is likely to lead to inefficient use of the network. If one user cannot use their quota for some reason, that bandwidth is lost. Likewise, there is no incentive for the user to be efficient within their quota; they have nothing to gain by using less than their allocation. As with CPU farms, some sort of dynamic allocation is more likely to be useful. A promising approach for sharing bandwidth in LHCONE is the 'Progressive Second-Price auction', where users are given a budget and are required to bid from that budget for the specific resources they want to reserve. The auction allows users to effectively determine among themselves the degree of sharing they are willing to accept based on the priorities of their traffic and their global share, as represented by their total budget. The network then implements those allocations using whatever mix of technologies is appropriate or available. This paper describes how the Progressive Second-Price auction works and how it can be applied to LHCONE. Practical questions are addressed, such as how budgets are set, what strategy users should use to manage their budget, how and how often the auction should be run, and how we ensure that the goals of fairness and efficiency are met.
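A bare-bones single-round version of the allocation and exclusion-compensation payment rule might look like the following: capacity is filled from the highest unit-price bid downward, and each bidder pays the declared value of the bandwidth that others would have received had that bidder abstained. The bidder names and numbers are made up; the full PSP mechanism is iterative, with bidders revising their bids until a fixed point is reached.

```python
def fill(bids, capacity, exclude=None):
    """Serve bids from highest unit price down; return name -> allocated qty."""
    alloc, left = {}, capacity
    for n in sorted(bids, key=lambda n: bids[n][1], reverse=True):
        if n == exclude:
            continue
        alloc[n] = min(bids[n][0], left)
        left -= alloc[n]
    return alloc

def psp_allocate(bids, capacity):
    """Exclusion compensation: each bidder pays the declared value of the
    bandwidth others would have received had that bidder stayed out."""
    base = fill(bids, capacity)
    result = {}
    for n in bids:
        without = fill(bids, capacity, exclude=n)
        pay = sum(bids[m][1] * (without.get(m, 0.0) - base.get(m, 0.0))
                  for m in bids if m != n)
        result[n] = (base.get(n, 0.0), pay)
    return result

# bids: name -> (requested bandwidth, unit price); link capacity is 80 units.
print(psp_allocate({"atlas": (40, 3.0), "cms": (60, 2.0), "lhcb": (30, 1.0)}, 80))
```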
Updated System-Availability and Resource-Allocation Program
NASA Technical Reports Server (NTRS)
Viterna, Larry
2004-01-01
A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability of a complex system of equipment, the mean time between failures of its components, life-cycle costs, and the scheduling of resources. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version ran under the MS-DOS operating system and could not be run under the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
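The core of such a Monte Carlo availability estimate is straightforward to sketch. The fragment below is a minimal illustration rather than ACARA's actual model (which adds block-diagram structure, spares, and resource constraints): it simulates alternating exponential failure/repair cycles for a single repairable component and averages the fraction of mission time spent up.

```python
import random

def simulate_availability(mtbf, mttr, mission_time, runs=10000, seed=1):
    """Estimate availability of one repairable component by Monte Carlo
    simulation of alternating failure/repair cycles."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < mission_time:
            ttf = rng.expovariate(1.0 / mtbf)    # time to next failure
            up += min(ttf, mission_time - t)     # count only in-mission uptime
            t += ttf
            if t >= mission_time:
                break
            t += rng.expovariate(1.0 / mttr)     # repair duration
        up_total += up / mission_time
    return up_total / runs

# Roughly mtbf/(mtbf+mttr) ~ 0.952 for these hypothetical parameters.
print(simulate_availability(mtbf=1000.0, mttr=50.0, mission_time=8760.0))
```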
Dynamic Resource Allocation in Disaster Response: Tradeoffs in Wildfire Suppression
Petrovic, Nada; Alderson, David L.; Carlson, Jean M.
2012-01-01
Challenges associated with the allocation of limited resources to mitigate the impact of natural disasters inspire fundamentally new theoretical questions for dynamic decision making in coupled human and natural systems. Wildfires are one of several types of disaster phenomena, including oil spills and disease epidemics, where (1) the disaster evolves on the same timescale as the response effort, and (2) delays in response can lead to increased disaster severity and thus greater demand for resources. We introduce a minimal stochastic process to represent wildfire progression that nonetheless accurately captures the heavy-tailed statistical distribution of fire sizes observed in nature. We then couple this model for fire spread to a series of response models that isolate fundamental tradeoffs both in the strength and timing of the response and in the division of limited resources across multiple competing suppression efforts. Using this framework, we compute optimal strategies for decision making scenarios that arise in fire response policy. PMID:22514605
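As a toy illustration of the coupling between response timing and final fire size (not the authors' model, which is specifically constructed to reproduce heavy-tailed size distributions), consider a discrete-time sketch in which a fire grows by one unit per step until contained, and suppression raises the per-step containment probability once responders arrive:

```python
import random

def fire_episode(base_stop=0.40, response_boost=0.25, delay=5,
                 rng=random.Random(0)):
    """Toy fire process: each step the fire is contained with probability
    base_stop (raised by response_boost once responders arrive after
    `delay` steps) or grows by one unit."""
    size, t = 1, 0
    while True:
        p_stop = base_stop + (response_boost if t >= delay else 0.0)
        if rng.random() < min(p_stop, 1.0):
            return size
        size += 1
        t += 1

# Later response -> larger mean and maximum fire sizes.
for delay in (0, 5, 20):
    sizes = [fire_episode(delay=delay) for _ in range(20000)]
    print(delay, sum(sizes) / len(sizes), max(sizes))
```

This geometric-tailed sketch only captures the delay/severity tradeoff; reproducing the empirical heavy tails is precisely what the paper's minimal stochastic process is designed to do.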
40 CFR 96.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Timing requirements for CAIR NOX Ozone... PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season Allowance Allocations § 96.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) By October 31, 2006, the permitting authority...
40 CFR 97.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Timing requirements for CAIR NOX Ozone... TRADING PROGRAMS CAIR NOX Ozone Season Allowance Allocations § 97.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) The Administrator will determine by order the CAIR NOX Ozone...
40 CFR 97.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Timing requirements for CAIR NOX Ozone... TRADING PROGRAMS CAIR NOX Ozone Season Allowance Allocations § 97.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) The Administrator will determine by order the CAIR NOX Ozone...
40 CFR 96.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Timing requirements for CAIR NOX Ozone... PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season Allowance Allocations § 96.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) By October 31, 2006, the permitting authority...
40 CFR 96.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Timing requirements for CAIR NOX Ozone... PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season Allowance Allocations § 96.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) By October 31, 2006, the permitting authority...
40 CFR 96.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Timing requirements for CAIR NOX Ozone... PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season Allowance Allocations § 96.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) By October 31, 2006, the permitting authority...
40 CFR 97.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Timing requirements for CAIR NOX Ozone... TRADING PROGRAMS CAIR NOX Ozone Season Allowance Allocations § 97.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) The Administrator will determine by order the CAIR NOX Ozone...
40 CFR 97.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Timing requirements for CAIR NOX Ozone... TRADING PROGRAMS CAIR NOX Ozone Season Allowance Allocations § 97.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) The Administrator will determine by order the CAIR NOX Ozone...
40 CFR 97.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Timing requirements for CAIR NOX Ozone... TRADING PROGRAMS CAIR NOX Ozone Season Allowance Allocations § 97.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) The Administrator will determine by order the CAIR NOX Ozone...
40 CFR 96.341 - Timing requirements for CAIR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Timing requirements for CAIR NOX Ozone... PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season Allowance Allocations § 96.341 Timing requirements for CAIR NOX Ozone Season allowance allocations. (a) By October 31, 2006, the permitting authority...
ERIC Educational Resources Information Center
Anderson, Derrick M.; Slade, Catherine P.
2016-01-01
While much is known about faculty time allocation, we know very little about how traditional managerial factors influence faculty time allocation behaviors. We know even less about the possible downsides associated with relying on these traditional managerial factors. Using survey data from the National Science Foundation/Department of Energy…
A Time Allocation Study of University Faculty
ERIC Educational Resources Information Center
Link, Albert N.; Swann, Christopher A.; Bozeman, Barry
2008-01-01
Many previous time allocation studies treat work as a single activity and examine trade-offs between work and other activities. This paper investigates the at-work allocation of time among teaching, research, grant writing and service by science and engineering faculty at top US research universities. We focus on the relationship between tenure…
The Economics of Adolescents' Time Allocation: Evidence from the Young Agent Project in Brazil
ERIC Educational Resources Information Center
Martinez-Restrepo, Susana
2012-01-01
What are the socioeconomic implications of the time allocation decisions made by low-income adolescents? The way adolescents allocate their time between schooling, labor and leisure has important implications for their education attainment, college aspirations, job opportunities and future earnings. This study focuses on adolescents and young…
Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng
2017-01-01
In the emergency management of chemical contingency spills, the efficiency of emergency rescue is strongly influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, an emergency material scheduling model (EMSM) with time-effective and cost-effective objectives is developed to coordinate both the allocation and the scheduling of the emergency materials. Meanwhile, an improved genetic algorithm (IGA), which includes a revision operation for the EMSM, is proposed to identify emergency material scheduling schemes. Then, scenario analysis is used to evaluate the optimal emergency rescue scheme under different emergency pollution conditions associated with different threat degrees, based on the analytic hierarchy process (AHP) method. The whole framework is then applied to a computational experiment based on the south-to-north water transfer project in China. The results demonstrate that the developed method not only guarantees an emergency rescue that satisfies the requirements of chemical contingency spills but also helps decision makers identify appropriate emergency material scheduling schemes that balance the time-effective and cost-effective objectives.
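A compact sketch of the weighted-sum flavor of such a model, using a plain genetic algorithm in Python. The encoding (one depot per risk source), the cost tables, and the weights are hypothetical; the paper's IGA adds a revision operation and richer constraints on material stocks and demands.

```python
import random

# Hypothetical data: time_cost[d][j] / money_cost[d][j] = cost of serving
# risk source j from depot d.
def fitness(assign, time_cost, money_cost, w_time=0.6, w_money=0.4):
    t = sum(time_cost[d][j] for j, d in enumerate(assign))
    c = sum(money_cost[d][j] for j, d in enumerate(assign))
    return w_time * t + w_money * c  # lower is better

def genetic_search(time_cost, money_cost, pop=40, gens=200, pm=0.1,
                   rng=random.Random(0)):
    n_depots, n_sources = len(time_cost), len(time_cost[0])
    population = [[rng.randrange(n_depots) for _ in range(n_sources)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, time_cost, money_cost))
        parents = population[:pop // 2]            # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_sources)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < pm:                  # point mutation
                child[rng.randrange(n_sources)] = rng.randrange(n_depots)
            children.append(child)
        population = parents + children
    return min(population, key=lambda a: fitness(a, time_cost, money_cost))
```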
ERIC Educational Resources Information Center
Martin-McCormick, Lynda; And Others
An advocacy packet on educational equity in computer education consists of five separate materials. A booklet entitled "Today's Guide to the Schools of the Future" contains four sections. The first section, a computer equity assessment guide, includes interview questions about school policies and allocation of resources, student and teacher…
Mathematical analysis and coordinated current allocation control in battery power module systems
NASA Astrophysics Data System (ADS)
Han, Weiji; Zhang, Liang
2017-12-01
As the major energy storage device and power supply source in numerous energy applications, such as solar panels, wind plants, and electric vehicles, battery systems often face the issue of charge imbalance among battery cells/modules, which can accelerate battery degradation, cause more energy loss, and even create fire hazards. To tackle this issue, various circuit designs have been developed to enable charge equalization among battery cells/modules. Recently, the battery power module (BPM) design has emerged as one of the promising solutions for its capability of independent control of individual battery cells/modules. In this paper, we propose a new current allocation method based on charging/discharging space (CDS) for performance control in BPM systems. Based on the proposed method, the properties of CDS-based current allocation with constant parameters are analyzed. Then, the real-time external total power requirement is taken into account and an algorithm is developed for coordinated system performance control. By choosing appropriate control parameters, the desired system performance can be achieved by coordinating module charge balance and total power efficiency. Moreover, the proposed algorithm has complete analytical solutions and is thus very computationally efficient. Finally, the efficacy of the proposed algorithm is demonstrated using simulations.
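The intuition behind CDS-based allocation can be sketched as splitting the demanded current in proportion to each module's remaining charging/discharging space. The proportional rule and the simple limit clamp below are illustrative assumptions, not the paper's exact control law:

```python
def allocate_currents(total_current, cds, limits):
    """Split a total current demand across battery modules in proportion to
    each module's remaining charging/discharging space (CDS), clamped to the
    module current limits. Clamping may leave a residual that a real
    controller would redistribute among the unsaturated modules."""
    total_cds = sum(cds)
    raw = [total_current * s / total_cds for s in cds]
    return [max(-lim, min(lim, i)) for i, lim in zip(raw, limits)]

# Modules with more remaining space absorb proportionally more current.
print(allocate_currents(30.0, cds=[2.0, 1.0, 3.0], limits=[15, 15, 15]))
# -> [10.0, 5.0, 15.0]
```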
NASA Astrophysics Data System (ADS)
Moiroux, Joffrey; Abram, Paul K.; Louâpre, Philippe; Barrette, Maryse; Brodeur, Jacques; Boivin, Guy
2016-04-01
Patch time allocation has received much attention in the context of optimal foraging theory, including the effect of environmental variables. We investigated the direct role of temperature on patch time allocation by parasitoids through physiological and behavioural mechanisms and its indirect role via changes in sex allocation and behavioural defences of the hosts. We compared the influence of foraging temperature on patch residence time between an egg parasitoid, Trichogramma euproctidis, and an aphid parasitoid, Aphidius ervi. The latter attacks hosts that are able to actively defend themselves, and may thus indirectly influence patch time allocation of the parasitoid. Patch residence time decreased with an increase in temperature in both species. The increased activity levels with warming, as evidenced by the increase in walking speed, partially explained these variations, but other mechanisms were involved. In T. euproctidis, the ability to externally discriminate parasitised hosts decreased at low temperature, resulting in a longer patch residence time. Changes in sex allocation with temperature did not explain changes in patch time allocation in this species. For A. ervi, we observed that aphids frequently escaped at intermediate temperature and defended themselves aggressively at high temperature, but displayed few defence mechanisms at low temperature. These defensive behaviours resulted in a decreased patch residence time for the parasitoid and partly explained the fact that A. ervi remained for a shorter time at the intermediate and high temperatures than at the lowest temperature. Our results suggest that global warming may affect host-parasitoid interactions through complex mechanisms including both direct and indirect effects on parasitoid patch time allocation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P.; Martin, D.; Drugan, C.
2010-11-23
This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at this leadership-class facility touched lives in ways both minute and massive, whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The facility remained true to its vision of acting as a forefront computational center that extends science frontiers by solving pressing problems for the nation. Its success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, the National Institute of Standards and Technology, and the European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid, the ALCF's 557-teraflops IBM Blue Gene/P supercomputer, enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF also obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow ever more pressing problems to be resolved even more expeditiously through breakthrough science in the years to come.
Oc, Burak; Bashshur, Michael R; Moore, Celia
2015-03-01
Subordinates are often seen as impotent, able to react to but not affect how powerholders treat them. Instead, we conceptualize subordinate feedback as an important trigger of powerholders' behavioral self-regulation and explore subordinates' reciprocal influence on how powerholders allocate resources to them over time. In 2 experiments using a multiparty, multiround dictator game paradigm, we found that when subordinates provided candid feedback about whether they found prior allocations to be fair or unfair, powerholders regulated how self-interested their allocations were over time. However, when subordinates provided compliant feedback about powerholders' prior allocation decisions (offered consistently positive feedback, regardless of the powerholders' prior allocation), those powerholders made increasingly self-interested allocations over time. In addition, we showed that guilt partially mediates this relationship: powerholders feel more guilty after receiving negative feedback about an allocation, subsequently leading to a less self-interested allocation, whereas they feel less guilty after receiving positive feedback about an allocation, subsequently taking more for themselves. Our findings integrate the literature on upward feedback with theory about moral self-regulation to support the idea that subordinates are an important source of influence over those who hold power over them.
Computer-Assisted Intervention for Children with Low Numeracy Skills
ERIC Educational Resources Information Center
Rasanen, Pekka; Salminen, Jonna; Wilson, Anna J.; Aunio, Pirjo; Dehaene, Stanislas
2009-01-01
We present results of a computer-assisted intervention (CAI) study on number skills in kindergarten children. Children with low numeracy skill (n = 30) were randomly allocated to two treatment groups. The first group played a computer game (The Number Race) which emphasized numerical comparison and was designed to train number sense, while the…
Allocation of Resources to Computer Support in Two-Year Colleges: 1979-80.
ERIC Educational Resources Information Center
Arth, Maurice P.
1982-01-01
Data on levels of computer-related expenditures for two-year colleges are presented. The data show institutions whether their computer-related expenditures, as a percentage of total operating expenditures and in dollars per credit headcount student, are high, medium, or low relative to expenditures of other similarly sized institutions. (MLW)
Job Priorities on Peregrine | High-Performance Computing | NREL
Jobs run with qos=high draw against a project's allocation. Node reservations can be requested for work that requires them; reservations help the scheduler plan resources for larger jobs more efficiently. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which ensures that these jobs run only when no higher-priority work is waiting.
40 CFR 97.511 - Timing requirements for TR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Timing requirements for TR NOX Ozone... TRADING PROGRAMS TR NOX Ozone Season Trading Program § 97.511 Timing requirements for TR NOX Ozone Season allowance allocations. (a) Existing units. (1) TR NOX Ozone Season allowances are allocated, for the control...
40 CFR 97.511 - Timing requirements for TR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Timing requirements for TR NOX Ozone... TRADING PROGRAMS TR NOX Ozone Season Trading Program § 97.511 Timing requirements for TR NOX Ozone Season allowance allocations. (a) Existing units. (1) TR NOX Ozone Season allowances are allocated, for the control...
40 CFR 97.511 - Timing requirements for TR NOX Ozone Season allowance allocations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Timing requirements for TR NOX Ozone... TRADING PROGRAMS TR NOX Ozone Season Trading Program § 97.511 Timing requirements for TR NOX Ozone Season allowance allocations. (a) Existing units. (1) TR NOX Ozone Season allowances are allocated, for the control...
Improving User Notification on Frequently Changing HPC Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuson, Christopher B; Renaud, William A
2016-01-01
Today's HPC centers' user environments can be very complex. Centers often contain multiple large, complicated computational systems, each with its own user environment. Changes to a system's environment can be very impactful; however, a center's user environment is, in one way or another, frequently changing. Because of this, it is vital for centers to notify users of change. For users, untracked changes can be costly, resulting in unnecessary debug time as well as wasted compute allocations and research time. Communicating frequent change to diverse user communities is a common and ongoing task for HPC centers. This paper will cover the OLCF's current processes and methods used to communicate change to users of the center's large Cray systems and supporting resources. The paper will share lessons learned and goals, as well as practices, tools, and methods used to continually improve and reach members of the OLCF user community.
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
Constant time worker thread allocation via configuration caching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichenberger, Alexandre E; O'Brien, John K. P.
Mechanisms are provided for allocating threads for execution of a parallel region of code. A request for allocation of worker threads to execute the parallel region of code is received from a master thread. Cached thread allocation information identifying prior thread allocations that have been performed for the master thread are accessed. Worker threads are allocated to the master thread based on the cached thread allocation information. The parallel region of code is executed using the allocated worker threads.
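A sketch of the idea in Python; the description above is language-agnostic, so the queue-per-worker structure, the cache key, and the barrier-based join below are illustrative assumptions rather than the patented mechanism itself. The point is that a repeat request with the same team size hits the cache and skips the allocation logic entirely:

```python
import threading
from queue import Queue

class CachingThreadPool:
    """Configuration caching for worker-thread allocation: the workers
    allocated for a master's first parallel region are remembered, so a
    repeat request with the same team size reuses them in constant time."""
    def __init__(self):
        self._cache = {}   # (master_id, team_size) -> list of worker queues

    def run_parallel(self, master_id, team_size, task, args_list):
        key = (master_id, team_size)
        if key not in self._cache:                 # slow path: allocate once
            queues = [Queue() for _ in range(team_size)]
            for q in queues:
                threading.Thread(target=self._worker, args=(q,),
                                 daemon=True).start()
            self._cache[key] = queues
        done = threading.Barrier(team_size + 1)    # workers + master
        for q, args in zip(self._cache[key], args_list):
            q.put((task, args, done))
        done.wait()                                # join the parallel region

    @staticmethod
    def _worker(q):
        while True:
            task, args, done = q.get()
            task(*args)
            done.wait()

pool = CachingThreadPool()
pool.run_parallel("m0", 4, print, [(i,) for i in range(4)])  # allocates
pool.run_parallel("m0", 4, print, [(i,) for i in range(4)])  # cache hit
```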
NASA Astrophysics Data System (ADS)
Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.
2015-11-01
Computer-aided engineering is based on models of phenomena which are expressed as algorithms. The implementations of these algorithms are usually software applications that process a large volume of numerical data, regardless of the size of the input data. In this way, finite element method applications used to have an input data generator which created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes, meant to minimize the bandwidth of the system of equations to be solved; computation of the equivalent nodal forces; computation of the element stiffness matrices; assembly of the system of equations; solving the system of equations; and computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to handle the information easily. Beyond this example, CAE applications use various stages of complex computation, the accuracy of the final results being of particular interest. Over time, the development of CAE applications has been a constant concern of the authors, and the accuracy of the results has been a very important target. The paper presents the various computing techniques which were devised and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended-precision data types was one of the solutions, the limitations being imposed by the size of the memory which may be allocated. To avoid memory-related problems, the data was stored in files. To minimize execution time, part of each file was accessed using dynamic memory allocation facilities. One of the most important outcomes of the paper is the design of a library which includes the optimized solutions previously tested, and which may be used for the easy development of original cross-platform CAE applications. Last but not least, beside the generality of the data type solutions, the work targets the development of a software library for the easy development of node-based CAE applications, each node having several known or unknown parameters, the system of equations being automatically generated and solved.
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Extensive simulation results support the approach. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
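The stated equal-sum principle is recognizable as per-user water-filling across that user's assigned subchannels. Here is a sketch under that reading (the function and variable names are ours; power_budget is the user's power share from the proportional-fairness step, cnr the channel-to-noise ratios of that user's subchannels):

```python
def waterfill(power_budget, cnr):
    """Water-filling over one user's subchannels: choose a water level mu so
    that p_n = mu - 1/cnr_n (clamped at 0) and sum(p_n) = power_budget.
    Then p_n + 1/cnr_n = mu is equal across active subchannels, matching the
    stated principle."""
    inv = sorted(1.0 / c for c in cnr)
    active = len(inv)
    while active > 0:
        mu = (power_budget + sum(inv[:active])) / active
        if mu > inv[active - 1]:   # all 'active' channels get positive power
            break
        active -= 1                # worst channel would go negative; drop it
    return [max(0.0, mu - v) for v in (1.0 / c for c in cnr)]

# Hypothetical CNRs; the worst subchannel (0.25) is shut off entirely.
print(waterfill(1.0, [4.0, 1.0, 0.25]))   # -> [0.875, 0.125, 0.0]
```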
Space Network Control Conference on Resource Allocation Concepts and Approaches
NASA Technical Reports Server (NTRS)
Moe, Karen L. (Editor)
1991-01-01
The results are presented of the Space Network Control (SNC) Conference. In the late 1990s, when the Advanced Tracking and Data Relay Satellite System is operational, Space Network communication services will be supported and controlled by the SNC. The goals of the conference were to survey existing resource allocation concepts and approaches, to identify solutions applicable to the Space Network, and to identify avenues of study in support of the SNC development. The conference was divided into three sessions: (1) Concepts for Space Network Allocation; (2) SNC and User Payload Operations Control Center (POCC) Human-Computer Interface Concepts; and (3) Resource Allocation Tools, Technology, and Algorithms. Key recommendations addressed approaches to achieving higher levels of automation in the scheduling process.
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
Real-life multi-objective engineering design problems are tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity, and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO), and big bang-big crunch (BB-BC) optimization. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA has a higher convergence and success rate than the original MOBA. For rulers of up to 20 marks, the efficiency improvement of the proposed PHMOBA, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas that of the original MOBA is 85%. Finally, the implications for further research are also discussed.
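The defining property behind OGRs is easy to state and check: a ruler is Golomb if all pairwise differences between its marks are distinct, and an OGR is a shortest such ruler for a given number of marks. A small verification sketch (finding shortest rulers is the hard combinatorial part these algorithms target; checking one is cheap):

```python
from itertools import combinations

def is_golomb(marks):
    """True if all pairwise differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# The known optimal 4-mark ruler {0, 1, 4, 6}: all 6 differences are unique
# and its length (largest mark) is 6.
print(is_golomb([0, 1, 4, 6]), max([0, 1, 4, 6]))
```

In the WDM setting, the marks play the role of channel positions, and the all-distinct-differences property is what keeps FWM products from landing on allocated channels.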
Performance Evaluation in Network-Based Parallel Computing
NASA Technical Reports Server (NTRS)
Dezhgosha, Kamyar
1996-01-01
Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which otherwise require supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARC workstations with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for the application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the factor restricting performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
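For reference, the speedup and parallel efficiency implied here have the standard definitions (a minimal statement assuming a fixed problem size and $p$ identical processors, with $T_1$ the serial time and $T_p$ the time on $p$ processors):

```latex
\[
  S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} .
\]
```

Communication latency enters through $T_p$, so coarse-grain decompositions, which amortize each message over more computation, achieve higher $S(p)$ on commodity networks.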
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
11 CFR 9003.6 - Production of computer information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... legal and accounting services, including the allocation of payroll and overhead expenditures; (4..., ground services and facilities made available to media personnel, including records relating to how costs... explaining the computer system's software capabilities, such as user guides, technical manuals, formats...
DOT National Transportation Integrated Search
1997-04-01
The Land Use, Air Quality, and Transportation Integrated Modeling Environment (LATIME) represents an integrated approach to computer modeling and simulation of land use allocation, travel demand, and mobile source emissions for the Albuquerque, New M...
Parallel Scaling Characteristics of Selected NERSC User ProjectCodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skinner, David; Verdier, Francesca; Anand, Harsh
This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080-CPU IBM SP and the largest parallel computer at NERSC. The scale of the workload, in terms of concurrency and problem size, is analyzed. Drawing on batch queue logs, performance data, and feedback from researchers, we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.
Optimizing Utilization of Detectors
2016-03-01
provide a quantifiable process to determine how much time should be allocated to each task sharing the same asset. This optimized expected time allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation by evaluating an integral, and Monte Carlo simulation determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables.
Nash Social Welfare in Multiagent Resource Allocation
NASA Astrophysics Data System (ADS)
Ramezani, Sara; Endriss, Ulle
We study different aspects of the multiagent resource allocation problem when the objective is to find an allocation that maximizes Nash social welfare, the product of the utilities of the individual agents. The Nash solution is an important welfare criterion that combines efficiency and fairness considerations. We show that the problem of finding an optimal outcome is NP-hard for a number of different languages for representing agent preferences; we establish new results regarding convergence to Nash-optimal outcomes in a distributed negotiation framework; and we design and test algorithms similar to those applied in combinatorial auctions for computing such an outcome directly.
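As a concrete baseline, a brute-force search makes the Nash objective explicit. The item names and additive utilities below are hypothetical, and the exponential enumeration reflects the NP-hardness results mentioned above rather than a practical algorithm:

```python
from itertools import product
from math import prod

def nash_optimal(n_agents, items, utility):
    """Exhaustively search allocations of indivisible items, maximizing Nash
    social welfare: the product of the agents' utilities."""
    best, best_nsw = None, -1.0
    for assignment in product(range(n_agents), repeat=len(items)):
        bundles = [tuple(it for it, a in zip(items, assignment) if a == i)
                   for i in range(n_agents)]
        nsw = prod(utility(i, b) for i, b in enumerate(bundles))
        if nsw > best_nsw:
            best, best_nsw = bundles, nsw
    return best, best_nsw

# Hypothetical additive utilities for 2 agents over 3 items.
vals = {0: {"a": 3, "b": 1, "c": 1}, 1: {"a": 1, "b": 2, "c": 2}}
u = lambda i, bundle: sum(vals[i][it] for it in bundle)
print(nash_optimal(2, ["a", "b", "c"], u))  # (('a',), ('b', 'c')), NSW 12
```

Note how the product objective penalizes leaving any agent with zero utility, which is the sense in which the Nash solution combines efficiency with fairness.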
Cloudbus Toolkit for Market-Oriented Cloud Computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian
This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.
What to do? The effects of discrepancies, incentives, and time on dynamic goal prioritization.
Schmidt, Aaron M; DeShon, Richard P
2007-07-01
This study examined factors that influence the dynamic pursuit of multiple goals over time. As hypothesized, goal-performance discrepancies were significantly related to subsequent time allocation. Greater distance from a given goal resulted in greater time subsequently allocated to that goal. In addition, the incentives offered for goal attainment determined the relative influence of discrepancies for each goal. When the incentives for each goal were equivalent, progress toward each goal exhibited equal influence, with greater time allocated to whichever goal was furthest from completion at the time. However, with an incentive available for only 1 of the 2 goals, time allocation was largely determined by progress toward the rewarded goal. Likewise, when incentives for each task differed in their approach-avoidance framing, progress toward the avoidance-framed goal was a stronger predictor of subsequent allocation than was progress toward the approach-framed goal. Finally, the influence of goal-performance discrepancies differed as a function of the time remaining for goal pursuit. The implications for future work on dynamic goal prioritization and the provision of performance incentives are discussed.
Makalu: fast recoverable allocation of non-volatile memory
Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.
2016-10-19
Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per-allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rafique, Rashid; Xia, Jianyang; Hararuk, Oleksandra
Land models are valuable tools for understanding the dynamics of the global carbon (C) cycle. Various models have been developed and used for predictions of future C dynamics, but uncertainties still exist. Diagnosing the models' behaviors in terms of their structures can help to narrow down the uncertainties in predictions of C dynamics. In this study, three widely used land surface models, namely CSIRO's Atmosphere Biosphere Land Exchange (CABLE) with 9 C pools, the Community Land Model (version 3.5) combined with the Carnegie-Ames-Stanford Approach (CLM-CASA) with 12 C pools, and the Community Land Model (version 4) (CLM4) with 26 C pools, were driven by observed meteorological forcing. The simulated C storage and residence time were used for analysis. The C storage and residence time were computed globally for all individual soil and plant pools, as well as net primary productivity (NPP) and its allocation to different plant components, based on these models. Remotely sensed NPP and the statistically derived HWSD and GLC2000 datasets were used as references to evaluate the performance of these models. Results showed that CABLE exhibited better agreement with the reference C storage and residence time for plant and soil pools than CLM-CASA and CLM4. CABLE had a longer bulk residence time for soil C pools and stored more C in roots, whereas CLM-CASA and CLM4 stored more C in woody pools due to differential NPP allocation. Overall, these results indicate that the differences in C storage and residence times among the three models are largely due to the differences in their fundamental structures (number of C pools), NPP allocation, and C transfer rates. Our results have implications for model development and provide a general framework to explain the bias/uncertainties in simulations of C storage and residence times from the perspective of model structure.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Multi-Objective Optimization for Trustworthy Tactical Networks: A Survey and Insights
2013-06-01
...problems: using repeated cooperative games [12], hedonic games [25], and nontransferable utility cooperative games [27]. It should be noted that trust ... examined an optimal task allocation problem in a distributed computing system where program modules need to be allocated to different processors to ...
Patrol force allocation for law enforcement: An introductory planning guide
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Kennedy, R. D.
1976-01-01
Previous and current methods for analyzing police patrol forces are reviewed and discussed. The steps in developing an allocation analysis procedure are defined, including the prediction of the rate of calls for service, determination of the number of patrol units needed, designing sectors, and analyzing dispatch strategies. Existing computer programs used for this purpose are briefly described, and some results of their application are given.
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
Guaranteeing synchronous message deadlines with the timed token medium access control protocol
NASA Technical Reports Server (NTRS)
Agrawal, Gopal; Chen, Biao; Zhao, Wei; Davari, Sadegh
1992-01-01
We study the problem of guaranteeing synchronous message deadlines in token ring networks where the timed token medium access control protocol is employed. Synchronous capacity, defined as the maximum time for which a node can transmit its synchronous messages every time it receives the token, is a key parameter in the control of synchronous message transmission. To ensure the transmission of synchronous messages before their deadlines, synchronous capacities must be properly allocated to individual nodes. We address the issue of appropriate allocation of the synchronous capacities. Several synchronous capacity allocation schemes are analyzed in terms of their ability to satisfy deadline constraints of synchronous messages. We show that an inappropriate allocation of the synchronous capacities could cause message deadlines to be missed even if the synchronous traffic is extremely low. We propose a scheme called the normalized proportional allocation scheme which can guarantee the synchronous message deadlines for synchronous traffic of up to 33 percent of available utilization. To date, no other synchronous capacity allocation scheme has been reported to achieve such substantial performance. Another major contribution of this paper is an extension to the previous work on the bounded token rotation time. We prove that the time elapsed between any v consecutive visits to a particular node is bounded by v · TTRT, where TTRT is the target token rotation time set up at system initialization time. The previous result by Johnson and Sevcik is a special case where v = 2. We use this result in the analysis of various synchronous allocation schemes. It can also be applied in other similar studies.
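Under the natural reading of the abstract, the normalized proportional scheme divides the usable portion of each token rotation in proportion to each node's synchronous utilization. A plausible statement consistent with that reading (with $C_i$ the synchronous transmission time node $i$ needs per period $P_i$, and $\tau$ the per-rotation protocol overhead) is:

```latex
\[
  H_i = \frac{U_i}{U}\,\bigl(TTRT - \tau\bigr),
  \qquad U_i = \frac{C_i}{P_i},
  \qquad U = \sum_{j=1}^{n} U_j ,
\]
```

so that nodes with heavier synchronous loads receive proportionally larger synchronous capacities, and the overhead term $\tau$ is excluded from what is shared.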
Time allocation and dietary habits in the United States: Time for re-evaluation?
Fiese, Barbara H
2018-02-21
In this non-exhaustive narrative review, time allocation and its relation to dietary habits are discussed. Drawing from reports relying on time use surveys, the amount of time dedicated to cooking and dining is found to be associated with health outcomes such as BMI and cardiovascular risk. Important modifiers include gender, race, ethnicity, and household income. The perception of time intensity is also discussed: individuals who perceive time pressure or strain may be less likely to engage in healthy food-related activities and be at greater risk for poor health outcomes. Finally, the direct observation of time allocation during meal occasions is discussed. The author calls for a socio-ecological approach to the study of time allocation and dietary habits in the United States and further consideration of direct observation of time use.
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities in attended locations while increasing them at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big-picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors.
Autonomous Reconfigurable Control Allocation (ARCA) for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hodel, A. S.; Callahan, Ronnie; Jackson, Scott (Technical Monitor)
2002-01-01
The role of control allocation (CA) in modern aerospace vehicles is to compute a command vector $\delta_c \in \mathbb{R}^{n_a}$ corresponding to commanded or desired body-frame torques (moments) $\tau_c = [L \; M \; N]^T$ on the vehicle, compensating for and/or responding to inaccuracies in off-line nominal control allocation calculations, actuator failures and/or degradations (reduced effectiveness), or actuator limitations (rate/position saturation). The command vector $\delta_c$ may govern the behavior of, e.g., aerosurfaces, reaction thrusters, engine gimbals, and/or thrust vectoring. Typically, the individual moments generated in response to each of the $n_a$ commands do not lie strictly in the roll, pitch, or yaw axes, and so a common practice is to group or gang actuators so that a one-to-one mapping from torque commands $\tau_c$ to actuator commands $\delta_c$ may be achieved in an off-line computed CA function.
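A common baseline for the mapping described above is pseudoinverse allocation over a linear effectiveness model $\tau = B\,\delta$. The sketch below uses a hypothetical 3x4 effectiveness matrix and a simplistic saturation clamp; ARCA's reconfigurable scheme is more elaborate than this:

```python
import numpy as np

# Hypothetical effectiveness matrix B: column j gives the roll/pitch/yaw
# moment produced per unit deflection of actuator j.
B = np.array([[ 1.0, -1.0, 0.2,  0.0],
              [ 0.5,  0.5, 0.0,  1.0],
              [ 0.1,  0.1, 1.0, -0.2]])

def allocate(tau_c, limits):
    """Minimum-norm solution of B @ delta = tau_c, then clamp to position
    limits. Clamping can reintroduce torque error, which is exactly what
    more sophisticated (e.g., reconfigurable) allocators handle."""
    delta = np.linalg.pinv(B) @ tau_c
    return np.clip(delta, -limits, limits)

tau_c = np.array([0.3, -0.1, 0.05])        # desired [L, M, N]
print(allocate(tau_c, limits=np.array([0.5, 0.5, 0.5, 0.5])))
```

Reconfiguration after a failure amounts to zeroing or rescaling the affected column of $B$ and recomputing the allocation on line rather than off line.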
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brigantic, Robert T.; Betzsold, Nick J.; Bakker, Craig KR
In this presentation we give an overview of a methodology for dynamic security risk quantification and optimal resource allocation of security assets for high-profile venues. This methodology is especially applicable to venues that require security screening operations, such as mass transit (e.g., train or airport terminals), critical infrastructure protection (e.g., government buildings), and large-scale public events (e.g., concerts or professional sports). The method starts by decomposing the three core components of risk, namely threat, vulnerability, and consequence, into their various subcomponents. For instance, vulnerability can be decomposed into availability, accessibility, organic security, and target hardness, and each of these can be evaluated against the potential threats of interest for the given venue. Once evaluated, these subcomponents are rolled back up to compute the specific value for the vulnerability core risk component. Likewise, the same is done for consequence and threat, and then risk is computed as the product of these three components. A key aspect of our methodology is dynamically quantifying risk. That is, we incorporate the ability to uniquely allow the subcomponents and core components, and in turn risk, to be quantified as a continuous function of time throughout the day, week, month, or year as appropriate.
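A minimal sketch of the product decomposition with a time-varying evaluation. The geometric-mean roll-up, the occupancy profile, and all numbers are illustrative assumptions; the methodology's actual weighting scheme is not specified in the abstract:

```python
def risk(threat, vulnerability, consequence):
    """Core decomposition used above: risk as the product of the three
    components, each normalized to [0, 1]."""
    return threat * vulnerability * consequence

def vulnerability_from(availability, accessibility, organic_security, hardness):
    # Assumed geometric-mean roll-up of the four subcomponents.
    return (availability * accessibility * organic_security * hardness) ** 0.25

# Hypothetical hourly profile for a transit terminal: accessibility and
# consequence rise with occupancy during the morning peak.
for hour, occupancy in [(6, 0.2), (8, 0.9), (11, 0.5)]:
    vul = vulnerability_from(0.8, occupancy, 0.6, 0.7)
    print(hour, round(risk(threat=0.3, vulnerability=vul,
                           consequence=occupancy), 3))
```

Evaluating such a profile over the day is what turns a static risk score into a schedule against which screening assets can be optimally allocated.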
PACS 2000: quality control using the task allocation chart
NASA Astrophysics Data System (ADS)
Norton, Gary S.; Romlein, John R.; Lyche, David K.; Richardson, Ronald R., Jr.
2000-05-01
Medical imaging's technological evolution in the next century will continue to include Picture Archive and Communication Systems (PACS) and teleradiology. It is difficult to predict radiology's future in the new millennium, with both computed radiography and direct digital capture competing as the primary image acquisition methods for routine radiography. Changes in Computed Axial Tomography (CT) and Magnetic Resonance Imaging (MRI) continue to amaze the healthcare community. No matter how the acquisition, display, and archive functions change, Quality Control (QC) of the radiographic imaging chain will remain an important step in the imaging process. The Task Allocation Chart (TAC) is a tool that can be used in a medical facility's QC process to indicate the testing responsibilities of the image stakeholders and the medical informatics department. The TAC shows a grid of equipment to be serviced, tasks to be performed, and the organization assigned to perform each task. Additionally, skills, tasks, time, and references for each task can be provided. QC of the PACS must be stressed as a primary element of a PACS implementation. The TAC can be used to clarify responsibilities during warranty and paid maintenance periods. Establishing a TAC as part of a PACS implementation has a positive effect on patient care and clinical acceptance.
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
An Optimization Framework for Dynamic, Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments and utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.
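The paper's algorithm is not reproduced here, but the following sketch conveys the idea of utility- and service-level-based allocation with graceful degradation: each task starts at its cheapest service level, and remaining capacity is spent on the upgrades with the best utility gained per unit of extra resource. All task data are hypothetical.

```python
# Greedy service-level allocation under a resource budget (illustrative only).
def allocate(tasks, capacity):
    # tasks: {name: [(demand, utility), ...]}, levels sorted by demand.
    choice = {name: 0 for name in tasks}                     # cheapest level
    used = sum(levels[0][0] for levels in tasks.values())
    upgrades = []
    for name, levels in tasks.items():
        for i in range(1, len(levels)):
            d_dem = levels[i][0] - levels[i - 1][0]
            d_util = levels[i][1] - levels[i - 1][1]
            upgrades.append((d_util / d_dem, name, i))       # utility density
    for _, name, lvl in sorted(upgrades, reverse=True):
        if lvl != choice[name] + 1:                          # only next level up
            continue
        extra = tasks[name][lvl][0] - tasks[name][lvl - 1][0]
        if used + extra <= capacity:
            choice[name] = lvl
            used += extra
    return choice

tasks = {
    "telemetry": [(2, 5), (4, 8), (7, 10)],   # (resource demand, utility)
    "imaging":   [(3, 6), (6, 12)],
}
print(allocate(tasks, capacity=12))
```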
NASA Astrophysics Data System (ADS)
Abbasi, Madiha; Imran Baig, Mirza; Shafique Shaikh, Muhammad
2013-12-01
OTDR-based techniques have become standard practice for measuring the chromatic dispersion distribution along an optical fiber transmission link. A constructive measurement technique is offered in this paper, in which a four-wavelength bidirectional optical time domain reflectometer (OTDR) is used to compute the chromatic dispersion distribution along an optical fiber transmission system. To improve the correction factor, a novel formulation has been developed, which leads to an enhanced and better-defined measurement. The experimental results obtained are in good agreement with expectations.
Power allocation and range performance considerations for a dual-frequency EBPSK/MPPSK system
NASA Astrophysics Data System (ADS)
Yao, Yu; Wu, Lenan; Zhao, Junhui
2017-12-01
Extended binary phase shift keying/M-ary position phase shift keying (EBPSK/MPPSK)-MODEM provides radar and communication functions on a single hardware platform with a single waveform. However, its range estimation accuracy is worse than that of continuous-wave (CW) radar because of the imbalance of power between the two carrier frequencies. In this article, a power allocation method for dual-frequency EBPSK/MPPSK modulated systems is presented. The power of the two signal transmitters is adequately allocated to ensure that the power in the two carrier frequencies is equal. The power allocation ratios for two types of modulation systems are obtained. Moreover, considerations regarding the operating range of the dual-frequency system are analysed. In addition to theoretical considerations, computer simulations are provided to illustrate the performance.
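The paper's allocation ratios are derived for the specific EBPSK/MPPSK waveforms; the sketch below only illustrates the balancing idea numerically, with assumed per-carrier efficiency fractions that are not values from the paper.

```python
# Choose the transmit-power split so radiated power in the two carriers matches.
# eta1, eta2: assumed fractions of each transmitter's power landing in its
# carrier frequency (hypothetical values).
eta1, eta2 = 0.62, 0.48
ratio = eta2 / eta1          # P1/P2 such that eta1 * P1 == eta2 * P2
P_total = 100.0              # total transmit power in watts (hypothetical)
P2 = P_total / (1.0 + ratio)
P1 = P_total - P2
print(ratio, eta1 * P1, eta2 * P2)   # the two carrier powers now match
```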
Zere, Eyob; Mandlhate, Custodia; Mbeeli, Thomas; Shangula, Kalumbi; Mutirua, Kauto; Kapenambili, William
2007-01-01
Background The pace of redressing inequities in the distribution of scarce health care resources in Namibia has been slow. This is due primarily to adherence to the historical incrementalist type of budgeting that has been used to allocate resources. Those regions with high levels of deprivation and relatively greater need for health care resources have been getting less than their fair share. To rectify this situation, which was inherited from the apartheid system, there is a need to develop a needs-based resource allocation mechanism. Methods Principal components analysis was employed to compute asset indices from asset-based and health-related variables, using data from the Namibia demographic and health survey of 2000. The asset indices then formed the basis of proposals for regional weights for establishing a needs-based resource allocation formula. Results Comparing the current allocations of public sector health care resources with estimates using a needs-based formula showed that regions with higher levels of need currently receive fewer resources than do regions with lower need. Conclusion To address the prevailing inequities in resource allocation, the Ministry of Health and Social Services should abandon the historical incrementalist method of budgeting/resource allocation and adopt a more appropriate allocation mechanism that incorporates measures of need for health care. PMID:17391533
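A minimal sketch of the method's core steps follows: PCA on standardized indicators yields an asset score per region, which is inverted into positive need weights for splitting a budget. The data here are random placeholders, not the Namibia DHS 2000 variables.

```python
# PCA-based asset index -> needs weights -> budget allocation (illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(13, 6))       # 13 regions x 6 indicators (hypothetical)

# First principal component of the standardized indicators as the asset score.
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(X)).ravel()

# Lower asset score = more deprived; convert to positive need weights summing to 1.
need = scores.max() - scores
weights = need / need.sum()

budget = 1_000_000.0               # hypothetical health budget
allocation = weights * budget
print(allocation.round(0))
```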
Zere, Eyob; Mandlhate, Custodia; Mbeeli, Thomas; Shangula, Kalumbi; Mutirua, Kauto; Kapenambili, William
2007-03-29
The pace of redressing inequities in the distribution of scarce health care resources in Namibia has been slow. This is due primarily to adherence to the historical incrementalist type of budgeting that has been used to allocate resources. Those regions with high levels of deprivation and relatively greater need for health care resources have been getting less than their fair share. To rectify this situation, which was inherited from the apartheid system, there is a need to develop a needs-based resource allocation mechanism. Principal components analysis was employed to compute asset indices from asset-based and health-related variables, using data from the Namibia demographic and health survey of 2000. The asset indices then formed the basis of proposals for regional weights for establishing a needs-based resource allocation formula. Comparing the current allocations of public sector health care resources with estimates using a needs-based formula showed that regions with higher levels of need currently receive fewer resources than do regions with lower need. To address the prevailing inequities in resource allocation, the Ministry of Health and Social Services should abandon the historical incrementalist method of budgeting/resource allocation and adopt a more appropriate allocation mechanism that incorporates measures of need for health care.
Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry
2011-05-01
The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps, where processing resources are first allocated to task-relevant stimuli and remaining capacity then 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step where processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weight. Here we present data from two partial report experiments where we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). The TVA fitted the data of the two experiments well, thus favoring the simple explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combined effect of allocation of processing capacity as well as limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level, based on the neural theory of visual attention by Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
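TVA's single-step rule is simple to state computationally: total processing capacity C is divided among all stimuli, relevant or not, in proportion to attentional weight. A minimal sketch with hypothetical weights:

```python
# TVA-style proportional allocation of total capacity C (items/s, hypothetical).
def tva_allocation(weights, C):
    total = sum(weights.values())
    return {stim: C * w / total for stim, w in weights.items()}

# Task-relevant targets carry higher attentional weight than a distractor.
print(tva_allocation({"target1": 1.0, "target2": 1.0, "distractor": 0.3}, C=60.0))
```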
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paris, Mark W.
The current one-year project allocation (w17 burst) supports the continuation of research performed in the two-year Institutional Computing allocation (w14 bigbangnucleosynthesis). The project has supported development and production runs resulting in several publications [1, 2, 3, 4] in peer-review journals and talks. Most significantly, we have recently achieved a significant improvement in code performance. This improvement was essential to the prospect of making further progress on this heretofore unsolved multiphysics problem that lies at the intersection of nuclear and particle theory and the kinetic theory of energy transport in a system with internal (quantum) degrees of freedom.
Community-aware task allocation for social networked multiagent systems.
Wang, Wanyuan; Jiang, Yichuan
2014-09-01
In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where each agent's cooperation domain is constrained to its community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that it remains NP-hard to maximize system overall profit. To solve this problem effectively, we present a heuristic algorithm that is composed of three phases: 1) task selection: select the desirable task to be allocated preferentially; 2) allocation to community: allocate the selected task to communities based on a significant-task-first heuristic; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap agent-first and breadth-first resource negotiation mechanism. Through theoretical analyses and experiments, the advantages of our presented heuristic algorithm and community-aware task allocation model are validated. 1) Our presented heuristic algorithm performs very closely to the benchmark exponential brute-force optimal algorithm and the network flow-based greedy algorithm in terms of system overall profit in small-scale applications. Moreover, in large-scale applications, the presented heuristic algorithm achieves approximately the same overall system profit, but significantly reduces the computational load compared with the greedy algorithm. 2) Our presented community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model and greatly improves the system overall profit compared with the previous local neighbor-aware task allocation model.
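The following is a highly simplified skeleton of the three phases described, under an assumed data model (tasks with a profit and a resource demand; communities as lists of agents holding resources); it omits the paper's negotiation mechanism and heuristic details.

```python
# Three-phase skeleton: task selection, allocation to community, allocation to agents.
def allocate(tasks, communities):
    profit = 0.0
    # 1) Task selection: most profitable ("significant") task first.
    for task in sorted(tasks, key=lambda t: t["profit"], reverse=True):
        # 2) Allocation to community: first community whose pool covers the demand.
        for agents in communities:
            pool = sum(a["resource"] for a in agents)
            if pool >= task["demand"]:
                # 3) Allocation to agents: draw resources agent by agent.
                need = task["demand"]
                for a in agents:
                    take = min(a["resource"], need)
                    a["resource"] -= take
                    need -= take
                profit += task["profit"]
                break
    return profit

communities = [
    [{"resource": 4}, {"resource": 2}],
    [{"resource": 5}],
]
tasks = [{"profit": 10.0, "demand": 5}, {"profit": 6.0, "demand": 4}]
print(allocate(tasks, communities))   # 16.0: both tasks are served
```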
Distributed storage and cloud computing: a test case
NASA Astrophysics Data System (ADS)
Piano, S.; Della Ricca, G.
2014-06-01
Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants taking full advantage of GARR-X wide area networks (10 Gb/s) and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.
Xu, Qun; Wang, Xianchao; Xu, Chao
2017-06-01
Multiplication on traditional electronic computers suffers from limited calculation accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Also, its parallel algorithm and optimization techniques are studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter calculation delays.
NASA Astrophysics Data System (ADS)
Yim, Keun Soo
This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.
Optimizing 4DCBCT projection allocation to respiratory bins.
O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J
2014-10-07
4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate if sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than conventional phase based binning and 59%-76% smaller than conventional displacement binning indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than using conventional phase based binning suggesting more uniform marker segmentation and less motion blur. Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.
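The streaking metric used above is easy to compute: for the projections assigned to one respiratory bin, take the standard deviation of the angular gaps between consecutive projection angles (uniform gaps mean fewer streaks). A minimal sketch with hypothetical angles:

```python
# Standard deviation of angular gaps between projections in one respiratory bin.
import numpy as np

def angular_gap_std(angles_deg):
    a = np.sort(np.asarray(angles_deg) % 360.0)
    gaps = np.diff(np.r_[a, a[0] + 360.0])   # include the wrap-around gap
    return gaps.std()

bin_a = [2, 40, 95, 170, 181, 260, 300, 350]   # well spread -> fewer streaks
bin_b = [2, 5, 9, 14, 200, 204, 208, 215]      # clustered   -> streaky
print(angular_gap_std(bin_a), angular_gap_std(bin_b))
```

An optimizer in the spirit of the paper would shift bin boundaries and share projections between neighbouring bins to drive this quantity down.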
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.
2016-01-01
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; ...
2016-11-18
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
Overview of ASC Capability Computing System Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott W.
This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
Home Computer Use and the Development of Human Capital. NBER Working Paper No. 15814
ERIC Educational Resources Information Center
Malamud, Ofer; Pop-Eleches, Cristian
2010-01-01
This paper uses a regression discontinuity design to estimate the effect of home computers on child and adolescent outcomes. We collected survey data from households who participated in a unique government program in Romania which allocated vouchers for the purchase of a home computer to low-income children based on a simple ranking of family…
Playing Games with Optimal Competitive Scheduling
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen
2005-01-01
This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource.
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008), which covered a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (a €194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the labor-intensive manual coding process by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
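A minimal sketch of the linear-ordering idea follows: serialize the 3-dimensional grid along a snake-like curve so consecutive processors are near neighbours, close the ordering into a loop, and place each job on a contiguous run of free processors (a bin-packing-style placement). The grid size and jobs are hypothetical, and this is a simplification, not the patented MC allocation algorithm itself.

```python
# Snake-order linearization of a 3-D processor grid plus first-fit placement.
def snake_order(nx, ny, nz):
    order = []
    for z in range(nz):
        for y in range(ny):
            xs = range(nx) if (y + z * ny) % 2 == 0 else range(nx - 1, -1, -1)
            order.extend((x, y, z) for x in xs)
    return order

def allocate(order, jobs, free):
    """First-fit of each job onto a contiguous run of free processors on the loop."""
    placements = []
    for size in jobs:
        for start in range(len(order)):
            run = [(start + k) % len(order) for k in range(size)]
            if all(free[i] for i in run):
                for i in run:
                    free[i] = False
                placements.append([order[i] for i in run])
                break
    return placements

order = snake_order(4, 4, 2)
print(allocate(order, [5, 3], [True] * len(order))[0])
```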
Belke, T W; Belliveau, J
2001-05-01
Six male Wistar rats were exposed to concurrent variable-interval schedules of wheel-running reinforcement. The reinforcer associated with each alternative was the opportunity to run for 15 s, and the duration of the changeover delay was 1 s. Results suggested that time allocation was more sensitive to relative reinforcement rate than was response allocation. For time allocation, the mean slopes and intercepts were 0.82 and 0.008, respectively. In contrast, for response allocation, mean slopes and intercepts were 0.60 and 0.03, respectively. Correction for low response rates and high rates of changing over, however, increased slopes for response allocation to about equal those for time allocation. The results of the present study suggest that the two-operant form of the matching law can be extended to wheel-running reinforcement. The effects of a low overall response rate, a short changeover delay, and long postreinforcement pausing on the assessment of matching in the present study are discussed.
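For reference, the slopes and intercepts above parameterize the two-operant matching law in its standard generalized (log-ratio) form, shown below; this is the textbook formulation (after Baum) rather than an equation reproduced from the paper.

```latex
% Generalized matching law (log-ratio form); the slopes (a) and intercepts
% (log k) reported above are estimates of these parameters.
\[
  \log\frac{B_1}{B_2} \;=\; a \,\log\frac{R_1}{R_2} \;+\; \log k
\]
% B_1, B_2: responses (or time) allocated to the two alternatives;
% R_1, R_2: obtained reinforcement rates; a: sensitivity; k: bias.
```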
Clustering of financial time series with application to index and enhanced index tracking portfolio
NASA Astrophysics Data System (ADS)
Dose, Christian; Cincotti, Silvano
2005-09-01
A stochastic-optimization technique based on time series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as a result of an optimization process (asset allocation). Present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
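A minimal sketch of the two-step procedure follows: cluster stocks on a correlation-based distance, keep one representative per cluster, then fit non-negative tracking weights by least squares. The returns are random placeholders, and the cluster count and representative choice are arbitrary simplifications of the paper's method.

```python
# Cluster-then-weight index tracking (illustrative, random data).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.optimize import nnls
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
R = rng.normal(0.0, 0.01, size=(250, 20))   # daily returns: 250 days x 20 stocks
index = R.mean(axis=1)                       # an equally weighted "index"

# Step 1: hierarchical clustering on a correlation-based distance.
corr = np.corrcoef(R.T)
D = squareform(np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None)), checks=False)
labels = fcluster(linkage(D, method="average"), t=5, criterion="maxclust")
reps = [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]

# Step 2: non-negative least-squares tracking weights over the representatives.
w, _ = nnls(R[:, reps], index)
w /= w.sum()                                 # fully invested portfolio
print(dict(zip(reps, np.round(w, 3))))
```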
The Requirements Generation System: A tool for managing mission requirements
NASA Technical Reports Server (NTRS)
Sheppard, Sylvia B.
1994-01-01
Historically, NASA's cost for developing mission requirements has been a significant part of a mission's budget. Large amounts of time have been allocated in mission schedules for the development and review of requirements by the many groups who are associated with a mission. Additionally, tracing requirements from a current document to a parent document has been time-consuming and costly. The Requirements Generation System (RGS) is a computer-supported cooperative-work tool that assists mission developers in the online creation, review, editing, tracing, and approval of mission requirements as well as in the production of requirements documents. This paper describes the RGS and discusses some lessons learned during its development.
Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R
2013-01-01
This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
1993-02-01
This report describes a (re)planning framework incorporating the demonstrators CALIGULA and ALLOCATOR, for resource allocation and scheduling respectively. In the Command and Control domain, resource allocation is illustrated by the demonstrator CALIGULA on the problem of allocating frequencies to a radio link network; the problems in the domain of scheduling are dealt with by ALLOCATOR.
Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization
Ling, Teresa Wai Ching; Yeung, Wing Kwan
2017-01-01
This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources. PMID:29104748
Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization.
Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan
2017-01-01
This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources.
Theoretical Definition of Instructor Role in Computer-Managed Instruction.
ERIC Educational Resources Information Center
McCombs, Barbara L.; Dobrovolny, Jacqueline L.
This report describes the results of a theoretical analysis of the ideal role functions of the Computer Managed Instruction (CMI) instructor. Concepts relevant to instructor behavior are synthesized from both cognitive and operant learning theory perspectives, and the roles allocated to instructors by seven large-scale operational CMI systems are…
Spectrum/Orbit-Utilization Program
NASA Technical Reports Server (NTRS)
Miller, Edward F.; Sawitz, Paul; Zusman, Fred
1988-01-01
Interferences among geostationary satellites determine allocations. Spectrum/Orbit Utilization Program (SOUP) is analytical computer program for determining mutual interferences among geostationary-satellite communication systems operating in given scenario. Major computed outputs are carrier-to-interference ratios at receivers at specified stations on Earth. Information enables determination of acceptability of planned communication systems. Written in FORTRAN.
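SOUP's headline output, the carrier-to-interference ratio, reduces to a simple decibel computation once the wanted and interfering received powers are known; the sketch below uses hypothetical power levels, not a SOUP scenario.

```python
# Carrier-to-interference ratio in dB from received power levels (watts).
import math

def c_over_i_db(carrier_w, interferers_w):
    return 10.0 * math.log10(carrier_w / sum(interferers_w))

print(c_over_i_db(1.0e-12, [2.0e-15, 5.0e-15, 1.0e-15]))   # about 21 dB
```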
Soldier Decision-Making for Allocation of Intelligence, Surveillance, and Reconnaissance Assets
2014-06-01
Digital Stratigraphy: Contextual Analysis of File System Traces in Forensic Science.
Casey, Eoghan
2017-12-28
This work introduces novel methods for conducting forensic analysis of file allocation traces, collectively called digital stratigraphy. These in-depth forensic analysis methods can provide insight into the origin, composition, distribution, and time frame of strata within storage media. Using case examples and empirical studies, this paper illuminates the successes, challenges, and limitations of digital stratigraphy. This study also shows how understanding file allocation methods can provide insight into concealment activities and how real-world computer usage can complicate digital stratigraphy. Furthermore, this work explains how forensic analysts have misinterpreted traces of normal file system behavior as indications of concealment activities. This work raises awareness of the value of taking the overall context into account when analyzing file system traces. This work calls for further research in this area and for forensic tools to provide necessary information for such contextual analysis, such as highlighting mass deletion, mass copying, and potential backdating. © 2017 American Academy of Forensic Sciences.
Networked Rectenna Array for Smart Material Actuators
NASA Technical Reports Server (NTRS)
Choi, Sang H.; Golembiewski, Walter T.; Song, Kyo D.
2000-01-01
The concept of microwave-driven smart material actuators is envisioned as the best option to alleviate the complexity associated with hard-wired control circuitry. A networked rectenna patch array receives and converts microwave power into DC power for an array of smart actuators. To use microwave power effectively, the concept of a power allocation and distribution (PAD) circuit is adopted for networking a rectenna/actuator patch array. The PAD circuit is embedded into a single embodiment of rectenna and actuator array. The thin-film microcircuit embodiment of the PAD circuit adds an insignificant amount of rigidity to membrane flexibility. A preliminary design and fabrication of PAD circuitry consisting of a few nodal elements were made for laboratory testing. The networked actuators were tested to correlate the network coupling effect, power allocation and distribution, and response time. The features of the preliminary design are 16-channel computer control of actuators by a PCI board and a compensator for power failure or leakage of one or more rectennas.
Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties.
Sinsky, Christine; Colligan, Lacey; Li, Ling; Prgomet, Mirela; Reynolds, Sam; Goeders, Lindsey; Westbrook, Johanna; Tutty, Michael; Blike, George
2016-12-06
Little is known about how physician time is allocated in ambulatory care. To describe how physician time is spent in ambulatory practice. Quantitative direct observational time and motion study (during office hours) and self-reported diary (after hours). U.S. ambulatory care in 4 specialties in 4 states (Illinois, New Hampshire, Virginia, and Washington). 57 U.S. physicians in family medicine, internal medicine, cardiology, and orthopedics who were observed for 430 hours, 21 of whom also completed after-hours diaries. Proportions of time spent on 4 activities (direct clinical face time, electronic health record [EHR] and desk work, administrative tasks, and other tasks) and self-reported after-hours work. During the office day, physicians spent 27.0% of their total time on direct clinical face time with patients and 49.2% of their time on EHR and desk work. While in the examination room with patients, physicians spent 52.9% of the time on direct clinical face time and 37.0% on EHR and desk work. The 21 physicians who completed after-hours diaries reported 1 to 2 hours of after-hours work each night, devoted mostly to EHR tasks. Data were gathered in self-selected, high-performing practices and may not be generalizable to other settings. The descriptive study design did not support formal statistical comparisons by physician and practice characteristics. For every hour physicians provide direct clinical face time to patients, nearly 2 additional hours are spent on EHR and desk work within the clinic day. Outside office hours, physicians spend another 1 to 2 hours of personal time each night doing additional computer and other clerical work. American Medical Association.
Zhang, Hang; Wu, Shih-Wei; Maloney, Laurence T.
2010-01-01
S.-W. Wu, M. F. Dal Martello, and L. T. Maloney (2009) evaluated subjects' performance in a visuo-motor task where subjects were asked to hit two targets in sequence within a fixed time limit. Hitting targets earned rewards and Wu et al. varied rewards associated with targets. They found that subjects failed to maximize expected gain; they failed to invest more time in the movement to the more valuable target. What could explain this lack of response to reward? We first considered the possibility that subjects require training in allocating time between two movements. In Experiment 1, we found that, after extensive training, subjects still failed: They did not vary time allocation with changes in payoff. However, their actual gains equaled or exceeded the expected gain of an ideal time allocator, indicating that constraining time itself has a cost for motor accuracy. In a second experiment, we found that movements made under externally imposed time limits were less accurate than movements made with the same timing freely selected by the mover. Constrained time allocation cost about 17% in expected gain. These results suggest that there is no single speed–accuracy tradeoff for movement in our task and that subjects pursued different motor strategies with distinct speed–accuracy tradeoffs in different conditions. PMID:20884550
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
Kotler, Burt P.; Brown, Joel; Mukherjee, Shomen; Berger-Tal, Oded; Bouskila, Amos
2010-01-01
Foraging animals have several tools for managing the risk of predation, and the foraging games between them and their predators. Among these, time allocation is foremost, followed by vigilance and apprehension. Together, their use influences a forager's time allocation and giving-up density (GUD) in depletable resource patches. We examined Allenby's gerbils (Gerbilus andersoni allenbyi) exploiting seed resource patches in a large vivarium under varying moon phases in the presence of a red fox (Vulpes vulpes). We measured time allocated to foraging patches electronically and GUDs from seeds left behind in resource patches. From these, we estimated handling times, attack rates and quitting harvest rates (QHRs). Gerbils displayed greater vigilance (lower attack rates) at brighter moon phases (full < wane < wax < new). Similarly, they displayed higher GUDs at brighter moon phases (wax > full > new > wane). Finally, gerbils displayed higher QHRs at new and waxing moon phases. Differences across moon phases not only reflect changing time allocation and vigilance, but changes in the state of the foragers and their marginal value of energy. Early in the lunar cycle, gerbils rely on vigilance and sacrifice state to avoid risk; later they defend state at the cost of increased time allocation; finally their state can recover as safe opportunities expand. In the predator–prey foraging game, foxes may contribute to these patterns of behaviours by modulating their own activity in response to the opportunities presented in each moon phase. PMID:20053649
Kotler, Burt P; Brown, Joel; Mukherjee, Shomen; Berger-Tal, Oded; Bouskila, Amos
2010-05-22
Foraging animals have several tools for managing the risk of predation, and the foraging games between them and their predators. Among these, time allocation is foremost, followed by vigilance and apprehension. Together, their use influences a forager's time allocation and giving-up density (GUD) in depletable resource patches. We examined Allenby's gerbils (Gerbilus andersoni allenbyi) exploiting seed resource patches in a large vivarium under varying moon phases in the presence of a red fox (Vulpes vulpes). We measured time allocated to foraging patches electronically and GUDs from seeds left behind in resource patches. From these, we estimated handling times, attack rates and quitting harvest rates (QHRs). Gerbils displayed greater vigilance (lower attack rates) at brighter moon phases (full < wane < wax < new). Similarly, they displayed higher GUDs at brighter moon phases (wax > full > new > wane). Finally, gerbils displayed higher QHRs at new and waxing moon phases. Differences across moon phases not only reflect changing time allocation and vigilance, but changes in the state of the foragers and their marginal value of energy. Early in the lunar cycle, gerbils rely on vigilance and sacrifice state to avoid risk; later they defend state at the cost of increased time allocation; finally their state can recover as safe opportunities expand. In the predator-prey foraging game, foxes may contribute to these patterns of behaviours by modulating their own activity in response to the opportunities presented in each moon phase.
Comparing methodologies for the allocation of overhead and capital costs to hospital services.
Tan, Siok Swan; van Ineveld, Bastianus Martinus; Redekop, William Ken; Hakkaart-van Roijen, Leona
2009-06-01
Typically, little consideration is given to the allocation of indirect costs (overheads and capital) to hospital services, compared to the allocation of direct costs. Weighted service allocation is believed to provide the most accurate indirect cost estimation, but the method is time consuming. To determine whether hourly rate, inpatient day, and marginal mark-up allocation are reliable alternatives for weighted service allocation. The cost approaches were compared independently for appendectomy, hip replacement, cataract, and stroke in representative general hospitals in The Netherlands for 2005. Hourly rate allocation and inpatient day allocation produce estimates that are not significantly different from weighted service allocation. Hourly rate allocation may be a strong alternative to weighted service allocation for hospital services with a relatively short inpatient stay. The use of inpatient day allocation would likely most closely reflect the indirect cost estimates obtained by the weighted service method.
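A small numeric sketch contrasting two of the allocation bases studied (hourly rate vs. inpatient day) is given below; all figures are hypothetical, not the study's Dutch hospital data.

```python
# Indirect-cost allocation under two bases (hypothetical figures).
indirect_pool = 40_000_000.0        # annual overhead + capital, euros
total_hours = 500_000.0             # annual care hours hospital-wide
total_inpatient_days = 200_000.0    # annual inpatient days hospital-wide

per_hour = indirect_pool / total_hours            # hourly rate basis
per_day = indirect_pool / total_inpatient_days    # inpatient day basis

# One hip replacement admission, hypothetical utilization:
hours, days = 30.0, 6.0
print(hours * per_hour, days * per_day)   # indirect cost under each method
```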
40 CFR 60.4141 - Timing requirements for Hg allowance allocations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the applicable control period is in 2018, the Administrator will assume that the allocations equal the...,000 ounces/ton) of Hg emissions in the applicable State trading budget under § 60.4140 for 2018 and... period is in 2018, the Administrator will assume that the allocations equal the allocations for the...
40 CFR 60.4141 - Timing requirements for Hg allowance allocations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the applicable control period is in 2018, the Administrator will assume that the allocations equal the...,000 ounces/ton) of Hg emissions in the applicable State trading budget under § 60.4140 for 2018 and... period is in 2018, the Administrator will assume that the allocations equal the allocations for the...
A Concept for Run-Time Support of the Chapel Language
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages both in computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in the instant report is recognition that a data domain invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.
Enabling Real-time Water Decision Support Services Using Model as a Service
NASA Astrophysics Data System (ADS)
Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.
2014-12-01
Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, which is a river routing model developed at University of Texas Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application has been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.
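As an illustration of the model-as-a-service pattern described, the sketch below wraps a model run behind an HTTP endpoint using Flask. The endpoint name, parameters, and run_rapid stub are hypothetical and do not reflect the actual published RAPID service.

```python
# Minimal model-as-a-service sketch: POST basin parameters, get discharge back.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_rapid(basin, forcing):
    # Placeholder for the actual river-routing computation.
    return {"basin": basin, "discharge_cms": [112.0, 98.5, 87.3]}

@app.route("/rapid", methods=["POST"])
def rapid_service():
    params = request.get_json()
    return jsonify(run_rapid(params["basin"], params.get("forcing")))

if __name__ == "__main__":
    app.run(port=8080)
```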
A market-based optimization approach to sensor and resource management
NASA Astrophysics Data System (ADS)
Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.
2006-05-01
Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
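A classic computational route to such a competitive equilibrium is Walrasian tatonnement: raise the price of over-demanded resources and lower the price of under-demanded ones until markets clear. The sketch below uses a hypothetical two-resource market with budget-constrained demand; it illustrates the equilibrium concept, not the paper's negotiation protocol.

```python
# Tatonnement price adjustment toward market-clearing prices.
def tatonnement(demand, supply, prices, step=0.1, iters=1000, tol=1e-6):
    # demand(prices) -> dict of total quantity requested per resource.
    for _ in range(iters):
        d = demand(prices)
        excess = {r: d[r] - supply[r] for r in prices}
        if max(abs(e) for e in excess.values()) < tol:
            break
        for r in prices:
            # Relative update: price rises with excess demand, falls with excess supply.
            prices[r] *= 1.0 + step * excess[r] / supply[r]
    return prices

# Two agents with fixed budgets splitting spend equally across two resources.
supply = {"radar_time": 10.0, "cpu": 20.0}
budgets = [5.0, 8.0]
demand = lambda p: {r: sum(b / (2 * p[r]) for b in budgets) for r in p}
print(tatonnement(demand, supply, {"radar_time": 1.0, "cpu": 1.0}))
```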
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing
NASA Astrophysics Data System (ADS)
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C.; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the GPU. We elegantly shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which thread block allocation and memory access are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU to relieve the GPU of extra, unnecessary computation burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, which have likewise leveraged the advantages of GPU platforms and yielded significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
Time Use in Massachusetts Expanded Learning Time (ELT) Schools: Issue Brief
ERIC Educational Resources Information Center
Caven, Meghan; Checkoway, Amy; Gamse, Beth; Wu, Sally
2012-01-01
Expanded learning time seems to be a simple idea: by lengthening the school day (or year), students have more time to learn. Yet as schools revisit their schedules and decide how to allocate time in their academic calendars, they can and do face challenging decisions related to time allocations. This brief highlights lessons learned from some…
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation
D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen
1980-01-01
Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.
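A minimal sketch of RECAL-style stratified sampling, in Python rather than PL/I: the sampling period, the weekday/weekend strata, and the sample sizes are hypothetical choices for illustration.

```python
# Stratified random sample of days over an arbitrary sampling period.
import random
from datetime import date, timedelta

def sample_days(start, end, n_weekday, n_weekend, seed=42):
    days = [start + timedelta(d) for d in range((end - start).days + 1)]
    weekdays = [d for d in days if d.weekday() < 5]
    weekends = [d for d in days if d.weekday() >= 5]
    rng = random.Random(seed)
    return sorted(rng.sample(weekdays, n_weekday) + rng.sample(weekends, n_weekend))

for d in sample_days(date(1980, 6, 1), date(1980, 8, 31), 8, 4):
    print(d)
```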
DOSESCREEN: a computer program to aid dose placement
Kimberly C. Smith; Jacqueline L. Robertson
1984-01-01
Careful selection of an experimental design for a bioassay substantially improves the precision of effective dose (ED) estimates. Design considerations typically include determination of sample size, dose selection, and allocation of subjects to doses. DOSESCREEN is a computer program written to help investigators select an efficient design for the estimation of an...
Shared Storage Usage Policy | High-Performance Computing | NREL
To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project
48 CFR 9904.410-60 - Illustrations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...
48 CFR 9904.410-60 - Illustrations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...
Lessons learned from a regional strategy for resource allocation.
Edwards, Janine C; Stapley, Jonathan; Akins, Ralitsa; Silenas, Rasa; Williams, Josie R
2005-01-01
Two qualitative case studies focus on the allocation of CDC funds distributed during 2002 for bioterrorism preparedness in two Texas public health regions (each as populous and complex as many states). Lessons learned are presented for public health officials and others who work to build essential public health services and security for our nation. The first lesson is that personal relationships are the cornerstone of preparedness. A major lesson is that a regional strategy to manage funds may be more effective than allocating funds on a per capita basis. One regional director required every local department to complete a strategic plan as a basis for proportional allocation of the funds. Control of communicable diseases was a central component of the planning. Some funds were kept at the regional level to provide epidemiology services, computer software, equipment, and training for the entire region. Confirmation of the value of this regional strategy was expressed by local public health and emergency management officials in a focus group 1 year after the strategy had been implemented. The group members also pointed out the need to streamline the planning process, provide up-to-date computer networks, and receive more than minimal communication. This regional strategy can be viewed from the perspective of adaptive leadership, defined as activities to bring about constructive change, which also can be used to analyze other difficult areas of preparedness.
Integrating Information Technologies Into Large Organizations
NASA Technical Reports Server (NTRS)
Gottlich, Gretchen; Meyer, John M.; Nelson, Michael L.; Bianco, David J.
1997-01-01
NASA Langley Research Center's product is aerospace research information. To this end, Langley uses information technology tools in three distinct ways. First, information technology tools are used in the production of information via computation, analysis, data collection and reduction. Second, information technology tools assist in streamlining business processes, particularly those that are primarily communication based. By applying these information tools to administrative activities, Langley spends fewer resources on managing itself and can allocate more resources for research. Third, Langley uses information technology tools to disseminate its aerospace research information, resulting in faster turnaround time from the laboratory to the end-customer.
Challenges and opportunities of cloud computing for atmospheric sciences
NASA Astrophysics Data System (ADS)
Pérez Montes, Diego A.; Añel, Juan A.; Pena, Tomás F.; Wallom, David C. H.
2016-04-01
Cloud computing is an emerging technological solution widely used in many fields. Initially developed as a flexible way of managing peak demand, it has begun to make its way into scientific research. One of the greatest advantages of cloud computing for scientific research is independence from access to a large cyberinfrastructure to fund or perform a research project. Cloud computing can avoid maintenance expenses for large supercomputers and has the potential to 'democratize' access to high-performance computing, giving flexibility to funding bodies for allocating budgets for the computational costs associated with a project. Two of the most challenging problems in atmospheric sciences are computational cost and uncertainty in meteorological forecasting and climate projections. Both problems are closely related. Usually uncertainty can be reduced with the availability of computational resources to better reproduce a phenomenon or to perform a larger number of experiments. Here we present results of the application of cloud computing resources for climate modeling using cloud computing infrastructures of three major vendors and two climate models. We show how the cloud infrastructure compares in performance to traditional supercomputers and how it provides the capability to complete experiments in shorter periods of time. The associated monetary cost is also analyzed. Finally we discuss the future potential of this technology for meteorological and climatological applications, both from the point of view of operational use and research.
Contrarian behavior in a complex adaptive system
NASA Astrophysics Data System (ADS)
Liang, Y.; An, K. N.; Yang, G.; Huang, J. P.
2013-01-01
Contrarian behavior is a kind of self-organization in complex adaptive systems (CASs). Here we report the existence of a transition point in a model resource-allocation CAS with contrarian behavior by using human experiments, computer simulations, and theoretical analysis. The resource ratio and system predictability serve as the tuning parameter and order parameter, respectively. The transition point helps to reveal the positive or negative role of contrarian behavior. This finding is in contrast to the common belief that contrarian behavior always has a positive role in resource allocation, say, stabilizing resource allocation by shrinking the redundancy or the lack of resources. It is further shown that resource allocation can be optimized at the transition point by adding an appropriate size of contrarians. This work is also expected to be of value to some other fields ranging from management and social science to ecology and evolution.
Nakrani, Sunil; Tovey, Craig
2007-12-01
An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
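A toy simulation of the bee-inspired idea, under assumed dynamics rather than the authors' algorithm: each server occasionally reassesses and switches to a service with probability proportional to its recent revenue per server, the way foragers follow dances advertising richer patches.

```python
# Toy honeybee-inspired allocator (an illustration of the idea, not the
# authors' algorithm): servers occasionally reassess and move toward services
# whose recent revenue per server ("nectar influx") is higher.
import random

def reallocate(servers, revenue, p_reassess=0.2):
    """servers: list of service indices; revenue: recent revenue per service."""
    per_server = [revenue[s] / max(1, servers.count(s)) for s in range(len(revenue))]
    total = sum(per_server)
    for i in range(len(servers)):
        if random.random() < p_reassess:      # like a forager watching dances
            r = random.uniform(0, total)
            acc = 0.0
            for svc, val in enumerate(per_server):
                acc += val
                if r <= acc:
                    servers[i] = svc          # switch to the advertised patch
                    break
    return servers

alloc = [0] * 6 + [1] * 4                     # 10 servers, 2 services
for step in range(5):
    alloc = reallocate(alloc, revenue=[3.0, 9.0])
    print(alloc.count(0), alloc.count(1))     # drifts toward the richer service
```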
Resource allocation decisions in low-income rural households.
Franklin, D L; Harrell, M W
1985-05-01
This paper is based on the theory that a society's nutritional well-being is both a cause and a consequence of the developmental process within that society. An approach to the choices made by poor rural households regarding food acquisition and nurturing behavior is emerging from recent research based on the new economic theory of household production. The central thesis of this approach is that household decisions related to the fulfillment of basic needs are strongly determined by decisions on the allocation of time to household production activities. Summarized are the results of the estimation of a model of household production and consumption behavior with data from a cross-sectional survey of 30 rural communities in Veraguas Province, Panama. The structure of the model consists of allocation of resources to nurturing activities and to production activities. The resources to be allocated are time and market goods, and in theory, these are allocated according to relative prices. The empirical results of this study are generally consistent with the predictions of the neoclassical economic model of household resource allocation. The major conclusions, that time allocations and market price conditions matter in the determination of well-being in low-income rural households and, importantly, that nurturing decisions significantly affect the product and factor market behavior of these households, form the basis for a discussion on implications for agricultural and rural development. Programs and policies that seek nutritional improvement should be determined with explicit recognition of the value of time and the importance of timing in the decisions of the poor.
Compilation of Abstracts of Theses Submitted by Candidates for Degrees.
1986-09-30
Musitano, J.R., LCDR, USNR: Fin-line Horn Antennas (118). Muth, L.R., LT, USN: VLSI Tutorials Through the Video-Computer Courseware Implementation System (119). ...Engineer Allocation Model (432), CPT, USA. Kiziltan, M., LTJG, Turkish Navy: Cognitive Performance Degradation on Sonar Operator and Torpedo Data... and Computer Engineering (433). VLSI Tutorials Through the Video-Computer Courseware Implementation System, Liesel R. Muth, Lieutenant, United States Navy (118).
NASA Astrophysics Data System (ADS)
Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal
2017-05-01
Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application shown here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges by introducing an auxiliary step, where the particle weights are recomputed (second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from large computational time as the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior. This makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.
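A minimal sketch of the auxiliary-style update that the ASIR filter builds on (a textbook auxiliary particle filter on a toy 1-D model, not the authors' ASGM filter): particles are pre-selected by how well their predictions explain the observation, then propagated and reweighted a second time.

```python
# Textbook auxiliary particle filter step on a toy 1-D model (illustrating the
# two-stage weighting idea behind ASIR, not the authors' ASGM filter).
import numpy as np

rng = np.random.default_rng(0)

def asir_step(particles, obs, sigma_obs=0.5, sigma_proc=0.1):
    # First stage: score each particle by how well its predicted mean explains obs.
    mu = particles                               # predicted means (identity model)
    first = np.exp(-0.5 * ((obs - mu) / sigma_obs) ** 2)
    idx = rng.choice(len(particles), size=len(particles), p=first / first.sum())
    # Propagate the pre-selected particles, then reweight against the observation.
    prop = particles[idx] + rng.normal(0, sigma_proc, len(particles))
    second = np.exp(-0.5 * ((obs - prop) / sigma_obs) ** 2) / first[idx]
    second /= second.sum()
    return prop, second

particles = rng.normal(0, 1, 500)
prop, w = asir_step(particles, obs=0.8)
print("posterior mean estimate:", float(np.sum(w * prop)))
```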
CO-FIRING COAL: FEEDLOT AND LITTER BIOMASS FUELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Kalyan Annamalai; Dr. John Sweeten; Dr. Sayeed Mukhtar
2000-10-24
The following are proposed activities for quarter 1 (6/15/00-9/14/00): (1) Finalize the allocation of funds within TAMU to co-principal investigators and the final task lists; (2) Acquire 3 D computer code for coal combustion and modify for cofiring Coal:Feedlot biomass and Coal:Litter biomass fuels; (3) Develop a simple one dimensional model for fixed bed gasifier cofired with coal:biomass fuels; and (4) Prepare the boiler burner for reburn tests with feedlot biomass fuels. The following were achieved During Quarter 5 (6/15/00-9/14/00): (1) Funds are being allocated to co-principal investigators; task list from Prof. Mukhtar has been received (Appendix A); (2) Order has been placed to acquire Pulverized Coal gasification and Combustion 3 D (PCGC-3) computer code for coal combustion and modify for cofiring Coal: Feedlot biomass and Coal: Litter biomass fuels. Reason for selecting this code is the availability of source code for modification to include biomass fuels; (3) A simplified one-dimensional model has been developed; however convergence had not yet been achieved; and (4) The length of the boiler burner has been increased to increase the residence time. A premixed propane burner has been installed to simulate coal combustion gases. First coal, as a reburn fuel will be used to generate base line data followed by methane, feedlot and litter biomass fuels.
Jepson, Marcus; Elliott, Daisy; Conefrey, Carmel; Wade, Julia; Rooshenas, Leila; Wilson, Caroline; Beard, David; Blazeby, Jane M; Birtle, Alison; Halliday, Alison; Stein, Rob; Donovan, Jenny L
2018-07-01
To explore how the concept of randomization is described by clinicians and understood by patients in randomized controlled trials (RCTs) and how it contributes to patient understanding and recruitment. Qualitative analysis of 73 audio recordings of recruitment consultations from five, multicenter, UK-based RCTs with identified or anticipated recruitment difficulties. One in 10 appointments did not include any mention of randomization. Most included a description of the method or process of allocation. Descriptions often made reference to gambling-related metaphors or similes, or referred to allocation by a computer. Where reference was made to a computer, some patients assumed that they would receive the treatment that was "best for them". Descriptions of the rationale for randomization were rarely present and often only came about as a consequence of patients questioning the reason for a random allocation. The methods and processes of randomization were usually described by recruiters, but often without clarity, which could lead to patient misunderstanding. The rationale for randomization was rarely mentioned. Recruiters should avoid problematic gambling metaphors and illusions of agency in their explanations and instead focus on clearer descriptions of the rationale and method of randomization to ensure patients are better informed about randomization and RCT participation. Copyright © 2018 University of Bristol. Published by Elsevier Inc. All rights reserved.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. Scheduling of nonpreemptive tasks in the cloud computing environment is irrevocable, so tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks on multiple VMs or on multiple cores of the same VM. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with existing methods.
Devi, D. Chitra; Uthariaraj, V. Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. Scheduling of nonpreemptive tasks in the cloud computing environment is irrevocable, so tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks on multiple VMs or on multiple cores of the same VM. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with existing methods. PMID:26955656
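A small sketch of initial placement under the stated considerations (VM capability and task length); the greedy earliest-finish-time rule is an assumed heuristic, not the authors' algorithm, and task interdependencies are ignored for brevity.

```python
# Assumed greedy heuristic (not the authors' algorithm): place each task,
# longest first, on the VM that yields the earliest finish time given the
# VM speeds and task lengths.
def schedule(task_lengths, vm_speeds):
    vm_free = [0.0] * len(vm_speeds)          # time at which each VM becomes free
    plan = []
    for tid, length in sorted(enumerate(task_lengths), key=lambda t: -t[1]):
        finish = [vm_free[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=finish.__getitem__)
        vm_free[best] = finish[best]
        plan.append((tid, best, finish[best]))
    return plan, max(vm_free)                 # (task, VM, finish time) and makespan

plan, makespan = schedule([400, 100, 300, 200], vm_speeds=[2.0, 1.0])
print(plan, makespan)
```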
1986-02-01
the area of Artificial Intelligence (AI). DARPA's Strategic Computing Program is developing an AI technology base upon which several applications...technologies with the Strategic Computing Program. In late 1983 the Strategic Computing Program (SCP) was announced. The program was organized to develop...solving a resource allocation problem. The remainder of this paper will discuss the TEMPLAR program as it relates to the Strategic Computing Program
ERIC Educational Resources Information Center
Ariel, Robert
2013-01-01
Learners typically allocate more resources to learning items that are higher in value than they do to items lower in value. For instance, when items vary in point value for learning, participants allocate more study time to the higher point items than they do to the lower point items. The current experiments extend this research to a context where…
Time allocation and cultural complexity: leisure time use across twelve cultures
Garry Chick; Sharon Xiangyou Shen
2008-01-01
This study is part of an effort to understand the effect of cultural evolution on leisure time through comparing time use across 12 cultures. We used an existing dataset initially collected by researchers affiliated with the UCLA Time Allocation Project (1987-1997), which contains behavioral data coded with standard methods from twelve native lowland Amazonian...
NASA Astrophysics Data System (ADS)
Kutt, P. H.; Balamuth, D. P.
1989-10-01
Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.
Parametric State Space Structuring
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Tilgner, Marco
1997-01-01
Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of a continuous-time Markov chain are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
Music and astronomy. II. Unitedsoundofcosmos
NASA Astrophysics Data System (ADS)
Caballero, J. A.; Arias, A.; Machuca, J. J.; Morente, S.
2017-03-01
We have been congratulated on the stage by a Nobel laureate (he was our curtain raiser), shocked audiences in rock concerts, written monthly on Musica Universalis, made the second concert in 3D in Spain after Kraftwerk and broadcast it live on Radio 3, mixed our music with poetry read aloud by scientists, composed the soundtracks of CARMENES, QUIJOTE, ESTRACK and the Gaia first data release, made a videoclip on how computers simulate the formation of stars, played our music in planetariums, museums and observatories throughout Spain and at the end of the meeting of the ESO telescope time allocation committee... All those moments will not be lost in time like tears in rain, but put together in Bilbao during the 2016 meeting of the Spanish Astronomical Society.
Faculty Time Allocations and Research Productivity: Gender, Race, and Family Effects.
ERIC Educational Resources Information Center
Bellas, Marcia L.; Toutkoushian, Robert K.
1999-01-01
A study using data from 14,614 full-time faculty examined total work hours, research productivity, and allocation of work time among teaching, research, and service. The study found variation in time expenditures and research output influenced by gender, race/ethnicity, and marital/parental status, but findings were also sensitive to definitions…
ERIC Educational Resources Information Center
Mederer, Helen J.
1993-01-01
Data from 359 married, full-time employed women tested extent to which allocation of tasks and allocation of household management predict perceptions of fairness and conflict. Task and management allocation contributed independently and differently to perceptions of fairness and conflict about housework allocation. Unfairness was predicted by both…
Extreme D'Hondt and round-off effects in voting computations
NASA Astrophysics Data System (ADS)
Konstantinov, M. M.; Pelova, G. B.
2015-11-01
D'Hondt (or Jefferson) method and Hare-Niemeyer (or Hamilton) method are widely used worldwide for seat allocation in proportional systems. Everything seems to be well known in this area. However, this is not the case. For example, the D'Hondt method can violate the quota rule from above, but this effect has not been analyzed as a function of the number of parties and/or the threshold used. Also, allocation methods are often implemented automatically as computer codes in machine arithmetic, in the belief that following the IEEE standards for double-precision binary arithmetic guarantees correct results. Unfortunately this may fail not only for double-precision arithmetic (usually producing 15-16 true decimal digits) but for any relative precision of the underlying binary machine arithmetic. This paper deals with two new issues: (1) finding conditions (the threshold in particular) under which D'Hondt seat allocation maximally violates the quota rule, and (2) analyzing the possible influence of rounding errors in automatic implementations of the Hare-Niemeyer method in machine arithmetic. Concerning the first issue, it is known that the maximal deviation of the D'Hondt allocation from the upper quota for the Bulgarian proportional system (240 MPs and a 4% threshold) is 5; this fact was established in 1991. A classical treatment of voting issues is the monograph [1], while electoral problems specific to Bulgaria have been treated in [2, 4]. The effect of the threshold on extreme seat allocations is also analyzed in [3]. Finally, we stress that voting theory may sometimes be mathematically trivial but always has great political impact, which is a strong motivation for further investigations in this area.
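For concreteness, here are textbook implementations of the two methods discussed (not the paper's code); exact rational arithmetic via the fractions module sidesteps the floating-point pitfalls the authors analyze.

```python
# Textbook D'Hondt (highest averages) and Hare-Niemeyer (largest remainder)
# allocations. Exact Fraction arithmetic avoids the rounding errors that the
# paper shows can corrupt machine-arithmetic implementations.
from fractions import Fraction

def dhondt(votes, seats):
    alloc = [0] * len(votes)
    for _ in range(seats):
        # Next seat goes to the party with the largest quotient v / (s + 1).
        i = max(range(len(votes)), key=lambda p: Fraction(votes[p], alloc[p] + 1))
        alloc[i] += 1
    return alloc

def hare_niemeyer(votes, seats):
    total = sum(votes)
    quotas = [Fraction(v * seats, total) for v in votes]
    alloc = [int(q) for q in quotas]                  # integer parts first
    order = sorted(range(len(votes)), key=lambda p: quotas[p] - alloc[p], reverse=True)
    for p in order[: seats - sum(alloc)]:             # largest remainders get the rest
        alloc[p] += 1
    return alloc

votes = [340_000, 280_000, 160_000, 60_000]
print(dhondt(votes, 240), hare_niemeyer(votes, 240))
```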
Wirth, K; Zielinski, P; Trinter, T; Stahl, R; Mück, F; Reiser, M; Wirth, S
2016-08-01
In hospitals, the radiological services provided to non-privately insured in-house patients are mostly distributed to the requesting disciplines through internal cost allocation (ICA). In many institutions, computed tomography (CT) is the modality with the largest amount of allocation credits. The aim of this work is to compare the ICA with the respective DRG (Diagnosis Related Groups) shares for diagnostic CT services in a university hospital setting. The data from four CT scanners in a large university hospital were processed for the 2012 fiscal year. For each of the 50 DRG groups with the most case-mix points, all diagnostic CT services were documented, including their respective amount of GOÄ allocation credits and invoiced ICA value. As the German Institute for Reimbursement of Hospitals (InEK) database groups the radiation disciplines (radiology, nuclear medicine and radiation therapy) together and also lacks any differentiation by modality, the determination of the diagnostic CT component was based on the existing institutional distribution of ICA allocations. Within the included 24,854 cases, 63,062,060 GOÄ-based performance credits were counted. The ICA credited these diagnostic CT services with € 819,029 (single credit value of 1.30 Eurocent), whereas accounting by DRG shares would have resulted in € 1,127,591 (single credit value of 1.79 Eurocent). The GOÄ single credit value is 5.62 Eurocent. Diagnostic CT services were thus provided comparatively inexpensively. In addition to a better financial result, changing the current ICA to DRG shares might also mean a chance for real revenues. However, the attractiveness depends considerably on how the DRG shares are distributed among the different radiation disciplines of an institution.
Design Principles and Algorithms for Air Traffic Arrival Scheduling
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Itoh, Eri
2014-01-01
This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-trafficmanagement (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
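A simplified sketch of the FCFS protocol at the heart of the scheduler, under assumed conditions: a single runway and a constant required separation. The TMA scheduler additionally handles runway assignment, delay allocation between airspace regions, and control errors.

```python
# Simplified FCFS arrival scheduling (assumed model: one runway, constant
# separation): each aircraft lands no earlier than its ETA and no closer than
# the required separation behind the previous arrival.
def fcfs_schedule(etas, separation=90.0):
    order = sorted(range(len(etas)), key=etas.__getitem__)  # first come, first served
    sta, last = {}, None
    for i in order:
        sta[i] = etas[i] if last is None else max(etas[i], sta[last] + separation)
        last = i
    return sta  # scheduled times of arrival; delay for flight i = sta[i] - etas[i]

etas = [0.0, 30.0, 200.0, 210.0]  # estimated times of arrival, in seconds
print(fcfs_schedule(etas))
```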
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2006-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2004-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
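A sketch of the cited Taubin-style signal-processing smoothing [Taubin, CG v/29 1995] applied to a single grid-line coordinate; the actual Volume Grid Manipulator operations are not reproduced here. Alternating a positive (lambda) and a negative (mu) diffusion step smooths noise without the shrinkage of plain Laplacian smoothing.

```python
# Taubin-style lambda|mu smoothing of one grid-line coordinate (after the
# cited Taubin 1995 approach; illustrative only, not the VGM tool's code).
import numpy as np

def taubin_smooth(x, lam=0.33, mu=-0.34, iters=50):
    x = x.astype(float).copy()
    for _ in range(iters):
        for step in (lam, mu):
            lap = np.zeros_like(x)
            lap[1:-1] = 0.5 * (x[:-2] + x[2:]) - x[1:-1]  # umbrella operator; endpoints fixed
            x += step * lap
    return x

noisy = np.linspace(0.0, 1.0, 21) + np.random.default_rng(1).normal(0, 0.02, 21)
print(np.round(taubin_smooth(noisy), 3))
```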
Optimized planning methodologies of ASON implementation
NASA Astrophysics Data System (ADS)
Zhou, Michael M.; Tamil, Lakshman S.
2005-02-01
Advanced network planning concerns effective network-resource allocation for a dynamic and open business environment. Planning methodologies for ASON implementation based on qualitative analysis and mathematical modeling are presented in this paper. The methodology includes methods for rationalizing technology and architecture, building network and nodal models, and developing dynamic programming for multi-period deployment. The multi-layered nodal architecture proposed here can accommodate various nodal configurations for a multi-plane optical network, and the network modeling presented here computes the required network elements for optimizing resource allocation.
GPU-accelerated algorithms for many-particle continuous-time quantum walks
NASA Astrophysics Data System (ADS)
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a 4th-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation not depending on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about the execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
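A minimal single-particle sketch of the Taylor-series propagator (illustrative only; the paper targets interacting particles, stochastic Hamiltonians, and GPU execution): the state is advanced by accumulating successive Hamiltonian applications, so memory use depends only on the state size, not on the truncation order or precision.

```python
# Taylor-series propagation of a CTQW state, psi(t+dt) ~ exp(-i H dt) psi(t),
# built term by term from repeated Hamiltonian applications (single-particle
# CPU sketch of the expansion idea, not the paper's GPU implementation).
import numpy as np

def taylor_step(H, psi, dt, order=12):
    out, term = psi.copy(), psi.copy()
    for k in range(1, order + 1):
        term = (-1j * dt / k) * (H @ term)   # next Taylor term of exp(-i H dt)
        out += term
    return out

n = 8                                        # walk on a cycle graph of 8 sites
H = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
psi = np.zeros(n, dtype=complex); psi[0] = 1.0
for _ in range(100):
    psi = taylor_step(H, psi, dt=0.05)
print("norm after 100 steps:", float(np.vdot(psi, psi).real))  # should stay near 1
```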
Davis, Ashley E; Mehrotra, Sanjay; Kilambi, Vikram; Kang, Joseph; McElroy, Lisa; Lapin, Brittany; Holl, Jane; Abecassis, Michael; Friedewald, John J; Ladner, Daniela P
2014-08-07
The Statewide Sharing variance to the national kidney allocation policy allocates kidneys not used within the procuring donor service area (DSA), first within the state, before the kidneys are offered regionally and nationally. Tennessee and Florida implemented this variance. Known geographic differences exist between the 58 DSAs, in direct violation of the Final Rule stipulated by the US Department of Health and Human Services. This study examined the effect of Statewide Sharing on geographic allocation disparity over time between DSAs within Tennessee and Florida and compared them with geographic disparity between the DSAs within a state for all states with more than one DSA (California, New York, North Carolina, Ohio, Pennsylvania, Texas, and Wisconsin). A retrospective analysis from 1987 to 2009 was conducted using Organ Procurement and Transplant Network data. Five previously used indicators for geographic allocation disparity were applied: deceased-donor kidney transplant rates, waiting time to transplantation, cumulative dialysis time at transplantation, 5-year graft survival, and cold ischemic time. Transplant rates, waiting time, dialysis time, and graft survival varied greatly between deceased-donor kidney recipients in DSAs in all states in 1987. After implementation of Statewide Sharing in 1992, disparity indicators decreased by 41%, 36%, 31%, and 9%, respectively, in Tennessee and by 28%, 62%, 34%, and 19%, respectively in Florida, such that the geographic allocation disparity in Tennessee and Florida almost completely disappeared. Statewide kidney allocations incurred 7.5 and 5 fewer hours of cold ischemic time in Tennessee and Florida, respectively. Geographic disparity between DSAs in all the other states worsened or improved to a lesser degree. As sweeping changes to the kidney allocation system are being discussed to alleviate geographic disparity--changes that are untested run the risk of unintended consequences--more limited changes, such as Statewide Sharing, should be further studied and considered. Copyright © 2014 by the American Society of Nephrology.
NASA Astrophysics Data System (ADS)
Delorit, Justin; Cristian Gonzalez Ortuya, Edmundo; Block, Paul
2017-09-01
In many semi-arid regions, multisectoral demands often stress available water supplies. Such is the case in the Elqui River valley of northern Chile, which draws on a limited-capacity reservoir to allocate 25 000 water rights. Delayed infrastructure investment forces water managers to address demand-based allocation strategies, particularly in dry years, which are realized through reductions in the volume associated with each water right. Skillful season-ahead streamflow forecasts have the potential to inform managers with an indication of future conditions to guide reservoir allocations. This work evaluates season-ahead statistical prediction models of October-January (growing season) streamflow at multiple lead times associated with manager and user decision points, and links predictions with a reservoir allocation tool. Skillful results (streamflow forecasts outperform climatology) are produced for short lead times (1 September: ranked probability skill score (RPSS) of 0.31, categorical hit skill score of 61 %). At longer lead times, climatological skill exceeds forecast skill due to fewer observations of precipitation. However, coupling the 1 September statistical forecast model with a sea surface temperature phase and strength statistical model allows for equally skillful categorical streamflow forecasts to be produced for a 1 May lead, triggered for 60 % of years (1950-2015), suggesting forecasts need not be strictly deterministic to be useful for water rights holders. An early (1 May) categorical indication of expected conditions is reinforced with a deterministic forecast (1 September) as more observations of local variables become available. The reservoir allocation model is skillful at the 1 September lead (categorical hit skill score of 53 %); skill improves to 79 % when categorical allocation prediction certainty exceeds 80 %. This result implies that allocation efficiency may improve when forecasts are integrated into reservoir decision frameworks. The methods applied here advance the understanding of the mechanisms and timing responsible for moisture transport to the Elqui Valley and provide a unique application of streamflow forecasting in the prediction of water right allocations.
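For reference, the ranked probability skill score used above can be computed as follows (the standard definition, with illustrative tercile forecasts rather than the study's data); RPSS > 0 means the forecast beats the climatology reference.

```python
# Standard ranked probability score (RPS) and skill score (RPSS) for ordered
# categorical forecasts, e.g. below/near/above-normal streamflow terciles.
import numpy as np

def rps(prob_forecasts, obs_categories):
    """Mean ranked probability score over forecasts of K ordered categories."""
    scores = []
    for probs, obs in zip(prob_forecasts, obs_categories):
        cdf_f = np.cumsum(probs)
        cdf_o = np.cumsum(np.eye(len(probs))[obs])  # one-hot observation CDF
        scores.append(np.sum((cdf_f - cdf_o) ** 2))
    return float(np.mean(scores))

fcst = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]   # illustrative tercile forecasts
clim = [[1 / 3] * 3] * 2                     # climatology reference
obs = [0, 1]                                 # observed categories
print("RPSS =", 1 - rps(fcst, obs) / rps(clim, obs))
```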
Code of Federal Regulations, 2010 CFR
2010-04-01
... account in adjusting earnings and profits for the taxable year. (3) If the tax determined under... thereunder. After the tax previously determined has been ascertained, a recomputation must then be made to... Computation of tax where cooperative redeems...
NASA Astrophysics Data System (ADS)
Menshikh, V.; Samorokovskiy, A.; Avsentev, O.
2018-03-01
A mathematical model for optimizing the allocation of resources to reduce the time needed for management decisions is presented, together with algorithms for solving the general resource allocation problem. The problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, which requires solving several specific subproblems: estimating the duration of each action as a function of the number of performers within the group that performs it; estimating the total execution time of all actions as a function of the quantitative composition of the groups of performers; and finding a distribution of the existing pool of performers among groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general problem of resource allocation are proposed.
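A sketch under an assumed model (not the paper's formulation): if action i takes w_i / k_i time with k_i performers and actions run in sequence, each additional performer should go to the group where it reduces total time the most; for this convex model the greedy increment is optimal.

```python
# Greedy performer allocation for an assumed model: duration_i = w_i / k_i,
# total time = sum of durations, each new performer placed where the marginal
# time reduction is largest (optimal for this convex separable objective).
import heapq

def allocate(work, n_performers):
    k = [1] * len(work)                      # every group needs at least one performer
    gains = [(-(w / 1 - w / 2), i) for i, w in enumerate(work)]  # negated for min-heap
    heapq.heapify(gains)
    for _ in range(n_performers - len(work)):
        _, i = heapq.heappop(gains)
        k[i] += 1
        heapq.heappush(gains, (-(work[i] / k[i] - work[i] / (k[i] + 1)), i))
    total = sum(w / ki for w, ki in zip(work, k))
    return k, total

print(allocate([12.0, 6.0, 3.0], n_performers=7))
```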
Dexter, Franklin; Blake, John T; Penning, Donald H; Sloan, Brian; Chung, Patricia; Lubarsky, David A
2002-03-01
Administrators at hospitals with a fixed annual budget may want to focus surgical services on priority areas to ensure their community receives the best health services possible. However, many hospitals lack the detailed managerial accounting data needed to ensure that such a change does not increase operating costs. The authors used a detailed hospital cost database to investigate by how much a change in allocations of operating room (OR) time among surgeons can increase perioperative variable costs. The authors obtained financial data for all patients who underwent outpatient or same-day admit surgery during a year. Linear programming was used to determine by how much changing the mix of surgeons can increase total variable costs while maintaining the same total hours of OR time for elective cases. Changing OR allocations among surgeons without changing total OR hours allocated will likely increase perioperative variable costs by less than 34%. If, in addition, intensive care unit hours for elective surgical cases are not increased, hospital ward occupancy is capped, and implant use is tracked and capped, perioperative costs will likely increase by less than 10%. These four variables predict 97% of the variance in total variable costs. The authors showed that changing OR allocations among surgeons without changing total OR hours allocated can increase hospital perioperative variable costs by up to approximately one third. Thus, at hospitals with fixed or nearly fixed annual budgets, allocating OR time based on an OR-based statistic such as utilization can adversely affect the hospital financially. The OR manager can reduce the potential increase in costs by considering not just OR time, but also the resulting use of hospital beds and implants.
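A toy version of the linear-programming idea described, with hypothetical cost rates and bounds: choose hours per surgeon to maximize total variable cost while total OR hours stay fixed, which bounds the worst-case cost increase from a reallocation.

```python
# Toy worst-case LP (assumed formulation, hypothetical numbers): maximize
# total variable cost over surgeon-hour allocations with total OR hours fixed.
from scipy.optimize import linprog

cost_per_hour = [5.0, 7.0, 9.0]           # hypothetical variable cost rates per surgeon
total_hours = 100.0
# linprog minimizes, so negate the objective to maximize cost.
res = linprog(c=[-c for c in cost_per_hour],
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_hours],
              bounds=[(20.0, 60.0)] * 3)  # assumed per-surgeon floors and caps
print(res.x, -res.fun)                    # worst-case allocation and cost
```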
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
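A compact iterated-greedy sketch for a simplified discrete berth allocation model (assumed: one queue per berth, service time = departure minus arrival; the benchmark DBAP formulation is richer): destroy a few assignments, then greedily reinsert each ship at the position that increases total service time least.

```python
# Iterated greedy for a simplified discrete berth allocation problem
# (illustrative model, not the paper's benchmark formulation).
import random

def total_time(berths, arrive, handle):
    cost = 0.0
    for queue in berths:
        t = 0.0
        for s in queue:
            t = max(t, arrive[s]) + handle[s]   # ship s departs at time t
            cost += t - arrive[s]               # its service (flow) time
    return cost

def iterated_greedy(arrive, handle, n_berths=2, d=2, iters=200, seed=0):
    rng = random.Random(seed)
    best = [list(range(i, len(arrive), n_berths)) for i in range(n_berths)]
    best_cost = total_time(best, arrive, handle)
    for _ in range(iters):
        cur = [q[:] for q in best]
        removed = rng.sample(range(len(arrive)), d)  # destruction step
        for q in cur:
            q[:] = [s for s in q if s not in removed]
        for s in removed:                            # greedy reinsertion step
            b, p = min(((b, p) for b, q in enumerate(cur) for p in range(len(q) + 1)),
                       key=lambda bp: total_time(
                           [q[:bp[1]] + [s] + q[bp[1]:] if b2 == bp[0] else q
                            for b2, q in enumerate(cur)], arrive, handle))
            cur[b].insert(p, s)
        cost = total_time(cur, arrive, handle)
        if cost < best_cost:
            best, best_cost = [q[:] for q in cur], cost
    return best, best_cost

arrive = [0, 1, 2, 3, 5, 8]; handle = [4, 3, 5, 2, 3, 2]
print(iterated_greedy(arrive, handle))
```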
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
... strategic allocation for the Fund from time to time based on capital markets research. The Adviser also may... modify the strategic allocation for the Fund from time to time based on capital markets research. The... further extended the time period for Commission action to February 8, 2012. On January 25, 2012, the...
A real-time 3D end-to-end augmented reality system (and its representation transformations)
NASA Astrophysics Data System (ADS)
Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois
2016-09-01
The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishiyama, Hiromichi, E-mail: hishiyam@kitasato-u.ac.jp; Satoh, Takefumi; Kawakami, Shogo
Purpose: To compare dosimetric parameters, seed migration rates, operation times, and acute toxicities of intraoperatively built custom-linked (IBCL) seeds with those of loose seeds for prostate brachytherapy. Methods and Materials: Participants were 140 patients with low or intermediate prostate cancer prospectively allocated to an IBCL seed group (n=74) or a loose seed group (n=66), using quasirandomization (allocated by week of the month). All patients underwent prostate brachytherapy using an interactive plan technique. Computed tomography and plain radiography were performed the next day and 1 month after brachytherapy. The primary endpoint was detection of a 5% difference in dose to 90% of prostate volume on postimplant computed tomography 1 month after treatment. Seed migration was defined as a seed position >1 cm from the cluster of other seeds on radiography. A seed dropped into the seminal vesicle was also defined as a migrated seed. Results: Dosimetric parameters including the primary endpoint did not differ significantly between groups, but seed migration rate was significantly lower in the IBCL seed group (0%) than in the loose seed group (55%; P<.001). Mean operation time was slightly but significantly longer in the IBCL seed group (57 min) than in the loose seed group (50 min; P<.001). No significant differences in acute toxicities were seen between groups (median follow-up, 9 months). Conclusions: This prospective quasirandomized control trial showed no dosimetric differences between IBCL seed and loose seed groups. However, a strong trend toward decreased postimplant seed migration was shown in the IBCL seed group.
Time Allocation of Students in Basic Clinical Clerkships in a Traditional Curriculum.
ERIC Educational Resources Information Center
Cook, Robert L.; And Others
1992-01-01
A study of medical clerkship students' time allocations found the greatest time expenditures in personal activities, then organized educational activities (rounds, conferences, lectures), chartwork, patient contact, examination study, ancillary activities, procedures, and directed study. Students slept 5.8 hours per night. Better balance of patient…
Sharing the skies: the Gemini Observatory international time allocation process
NASA Astrophysics Data System (ADS)
Margheim, Steven J.
2016-07-01
Gemini Observatory serves a diverse community of four partner countries (United States, Canada, Brazil, and Argentina), two hosts (Chile and University of Hawaii), and limited-term partnerships (currently Australia and the Republic of Korea). Observing time is available via multiple opportunities including Large and Long Programs, Fast-turnaround programs, and regular semester queue programs. The slate of programs for observation each semester must be created by merging programs from these multiple, conflicting sources. This paper describes the time allocation process used to schedule the overall science program for the semester, with emphasis on the International Time Allocation Committee and the software applications used.
Student Learning Time: A Literature Review. OECD Education Working Papers, No. 127
ERIC Educational Resources Information Center
Gromada, Anna; Shewbridge, Claire
2016-01-01
This paper examines student learning time as a key educational resource. It presents an overview of how different OECD countries allocate instruction time. It also develops a model to understand the effective use of allocated instruction time and examines how different OECD countries compare on this. The paper confirms the value of sufficient…
Reading time allocation strategies and working memory using rapid serial visual presentation.
Busler, Jessica N; Lazarte, Alejandro A
2017-09-01
Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce longer pauses at the ends of clauses and ends of sentences when reading texts with multiple embedded clauses. We studied if WM relates to allocation of time at end of clauses or sentences in a self-paced reading task and in 2 MW-RSVP reading conditions (Constant MW-RSVP and Paused MW-RSVP) in which the reading rate was kept constant or pauses were induced. Higher WM span readers were more affected by the restriction of time allocation in the MW-RSVP conditions. In addition, the recall of both higher and lower WM-span readers benefited from the paused MW-RSVP presentation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Underground coal mining section data
NASA Technical Reports Server (NTRS)
Gabrill, C. P.; Urie, J. T.
1981-01-01
A set of tables which display the allocation of time for ten personnel and eight pieces of underground coal mining equipment to ten function categories is provided. Data from 125 full shift time studies contained in the KETRON database was utilized as the primary source data. The KETRON activity and delay codes were mapped onto JPL equipment, personnel and function categories. Computer processing was then performed to aggregate the shift level data and generate the matrices. Additional, documented time study data were analyzed and used to supplement the KETRON database. The source data including the number of shifts are described. Specific parameters of the mines from which these data were extracted are presented. The result of the data processing including the required JPL matrices is presented. A brief comparison with a time study analysis of continuous mining systems is presented. The procedures used for processing the source data are described.
Accuracy of medical subject heading indexing of dental survival analyses.
Layton, Danielle M; Clarke, Michael
2014-01-01
To assess the Medical Subject Headings (MeSH) indexing of articles that employed time-to-event analyses to report outcomes of dental treatment in patients. Articles published in 2008 in 50 dental journals with the highest impact factors were hand searched to identify articles reporting dental treatment outcomes over time in human subjects with time-to-event statistics (included, n = 95), without time-to-event statistics (active controls, n = 91), and all other articles (passive controls, n = 6,769). The search was systematic (kappa 0.92 for screening, 0.86 for eligibility). Outcome-, statistic- and time-related MeSH were identified, and differences in allocation between groups were analyzed with chi-square and Fisher exact statistics. The most frequently allocated MeSH for included and active control articles were "dental restoration failure" (77% and 52%, respectively) and "treatment outcome" (54% and 48%, respectively). Outcome MeSH was similar between these groups (86% and 77%, respectively) and significantly greater than passive controls (10%, P < .001). Significantly more statistical MeSH were allocated to the included articles than to the active or passive controls (67%, 15%, and 1%, respectively, P < .001). Sixty-nine included articles specifically used Kaplan-Meier or life table analyses, but only 42% (n = 29) were indexed as such. Significantly more time-related MeSH were allocated to the included than the active controls (92% and 79%, respectively, P = .02), or to the passive controls (22%, P < .001). MeSH allocation within MEDLINE to time-to-event dental articles was inaccurate and inconsistent. Statistical MeSH were omitted from 30% of the included articles and incorrectly allocated to 15% of active controls. Such errors adversely impact search accuracy.
Elastic Extension of a CMS Computing Centre Resources on External Clouds
NASA Astrophysics Data System (ADS)
Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.
2016-10-01
After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however create large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically modeled to cope with the needs of the CMS experiment and local users. Moreover, direct access/integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
LSST Resources for the Community
NASA Astrophysics Data System (ADS)
Jones, R. Lynne
2011-01-01
LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees of area sampled every few days, throughout a total of ten years of time -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels on petabyte-scale could also be conducted at the DAC, using compute resources allocated in a similar manner to a TAC. DAC resources will be available to all individuals in member countries or institutes and LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Forouzanfar, Fateme; Ebrahimnejad, Sadoullah
2013-07-01
This paper considers a single-sourcing network design problem for a three-level supply chain. For the first time, a novel mathematical model is presented that simultaneously considers risk-pooling, inventory at distribution centers (DCs) under demand uncertainty, several alternatives for transporting the product between facilities, and the routing of vehicles from distribution centers to customers in a stochastic supply chain system. The problem is formulated as a bi-objective stochastic mixed-integer nonlinear programming model. The aim of the model is to determine the number of distribution centers to open, their locations and capacity levels, and the allocation of customers to distribution centers and of distribution centers to suppliers. It also determines the inventory control decisions on the amount of ordered products and the safety stock at each opened DC, and selects a vehicle type for transportation. Moreover, it determines routing decisions, such as the routes of vehicles starting from an opened distribution center, serving its allocated customers, and returning to that distribution center. All decisions are made so that the total system cost and the total transportation time are minimized. The Lingo software is used to solve the presented model, and the computational results are illustrated in this paper.
Zhang, Jianhua; Yin, Zhong; Wang, Rubin
2017-01-01
This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as the dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants in a simulated process control task environment. A participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG- and ECG-related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. The predicted CTL is then used to adaptively allocate the number of process control tasks between the operator and a computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the proposed adaptive automation strategy.
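For readers unfamiliar with the LSSVM, its training reduces to solving one linear system rather than a quadratic program. Below is a minimal binary sketch in the standard Suykens formulation with an RBF kernel, under illustrative data; the paper's five-class, participant-specific classifier would extend this (e.g., one-vs-rest), and this is not the authors' exact implementation.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Binary LS-SVM (Suykens form): training is one linear solve.
    y must be in {-1, +1}; gamma is the regularization weight."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))                  # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = K * np.outer(y, y) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[1:], sol[0]                              # alpha, bias

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)

# Toy usage: two Gaussian blobs stand in for 12-feature EEG/ECG vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 12)), rng.normal(+1, 1, (20, 12))])
y = np.r_[-np.ones(20), np.ones(20)]
alpha, b = lssvm_train(X, y)
print((lssvm_predict(X, y, alpha, b, X) == y).mean())   # training accuracy
```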
Self-Regulated Reading in Adulthood
Stine-Morrow, Elizabeth A. L.; Soederberg Miller, Lisa M.; Gagne, Danielle D.; Hertzog, Christopher
2008-01-01
Younger and older adults read a series of passages of three different genres for an immediate assessment of text memory (measured by recall and true-false questions). Word-by-word reading times were measured and decomposed into components reflecting resource allocation to particular linguistic processes using regression. Allocation to word and textbase processes showed some consistency across the three text types and was predictive of memory performance. Older adults allocated more time to word and textbase processes than the young did, but showed enhanced contextual facilitation. Structural equation modeling showed that greater resource allocation to word processes was required among readers with relatively low working memory spans and poorer verbal ability, and that greater resource allocation to textbase processes was engendered by higher verbal ability. Results are discussed in terms of a model of self-regulated language processing suggesting that older readers may compensate for processing deficiencies through greater reliance on discourse context and on increases in resource allocation that are enabled through growth in crystallized ability. PMID:18361662
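The decomposition of word-by-word reading times into process-specific allocations is, at bottom, a regression of reading times on word-level and textbase-level predictors. A small self-contained sketch with simulated data follows; the predictors and coefficient values are illustrative assumptions, not the study's materials.

```python
import numpy as np

# Simulated word-level predictors for one reader: word length and log
# frequency (word-level processes), plus new-concept and sentence-final
# flags (textbase processes). Coefficients estimate ms allocated per unit.
rng = np.random.default_rng(0)
n = 200
length   = rng.integers(2, 12, n)
log_freq = rng.normal(8, 2, n)
new_noun = rng.integers(0, 2, n)      # word introduces a new concept?
sent_end = rng.integers(0, 2, n)      # sentence wrap-up position?

X = np.column_stack([np.ones(n), length, log_freq, new_noun, sent_end])
rt = (250 + 18 * length - 6 * log_freq + 45 * new_noun + 120 * sent_end
      + rng.normal(0, 40, n))         # simulated reading times (ms)

coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
for name, c in zip(["intercept", "length", "log_freq", "new_noun", "sent_end"],
                   coef):
    print(f"{name:9s} {c:8.1f} ms")
```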
Political model of social evolution
Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin
2011-01-01
Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved. PMID:22198760
Political model of social evolution.
Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin
2011-12-27
Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved.
2010-06-01
must be considered when forces are (notionally) allocated. The model in this thesis will attempt to show the amount of time each person in the 2...Command and Control organization will allocate to this mission. This thesis then intends to demonstrate that an organizational structure that...Indian Ocean. Focusing on this geographic area helps to frame the structure of the Department of Defense forces that monitor, assess, allocate
McIntosh, Catherine; Dexter, Franklin; Epstein, Richard H
2006-12-01
In this tutorial, we consider the impact of operating room (OR) management on anesthesia group and OR labor productivity and costs. Most of the tutorial focuses on the steps required for each facility to refine its OR allocations using its own data collected during patient care. Data from a hospital in Australia are used throughout to illustrate the methods. OR allocation is a two-stage process. During the initial tactical stage of allocating OR time, OR capacity ("block time") is adjusted. For operational decision-making on a shorter-term basis, the existing workload can be considered fixed. Staffing is matched to that workload based on maximizing the efficiency of use of OR time. Scheduling cases and making decisions on the day of surgery to increase OR efficiency are worthwhile interventions to increase anesthesia group productivity. However, by far, the most important step is the appropriate refinement of OR allocations (i.e., planning service-specific staffing) 2-3 mo before the day of surgery. Reducing surgical and/or turnover times and delays in first-case-of-the-day starts generally provides small reductions in OR labor costs. Results vary widely because they are highly sensitive both to the OR allocations (i.e., staffing) and to the appropriateness of those OR allocations.
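A common way to formalize "maximizing the efficiency of use of OR time" is to choose the staffed hours that minimize the expected cost of underutilized plus overutilized time. The sketch below assumes an overutilized (overtime) hour costs 1.75 times an underutilized one; that ratio and the workload figures are illustrative assumptions, not data from the tutorial.

```python
# Choose the staffed hours for one service that minimize expected staffing
# inefficiency = cost of underutilized time + cost of overutilized time.
def inefficiency(staffed, workloads, over_cost=1.75):
    return sum(max(staffed - w, 0) + over_cost * max(w - staffed, 0)
               for w in workloads) / len(workloads)

historical = [6.5, 9.0, 7.75, 10.5, 8.0, 12.0, 7.25, 9.5]  # hours/day, illustrative
candidates = [8, 10, 13]                                   # e.g. 8 h, 10 h, 13 h shifts
best = min(candidates, key=lambda s: inefficiency(s, historical))
print(best, round(inefficiency(best, historical), 2))
```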
Umasunthar, T; Procktor, A; Hodes, M; Smith, J G; Gore, C; Cox, H E; Marrs, T; Hanna, H; Phillips, K; Pinto, C; Turner, P J; Warner, J O; Boyle, R J
2015-07-01
Previous work has shown patients commonly misuse adrenaline autoinjectors (AAI). It is unclear whether this is due to inadequate training, or poor device design. We undertook a prospective randomized controlled trial to evaluate ability to administer adrenaline using different AAI devices. We allocated mothers of food-allergic children prescribed an AAI for the first time to Anapen or EpiPen using a computer-generated randomization list, with optimal training according to manufacturer's instructions. After one year, participants were randomly allocated a new device (EpiPen, Anapen, new EpiPen, JEXT or Auvi-Q), without device-specific training. We assessed ability to deliver adrenaline using their AAI in a simulated anaphylaxis scenario six weeks and one year after initial training, and following device switch. Primary outcome was successful adrenaline administration at six weeks, assessed by an independent expert. Secondary outcomes were success at one year, success after switching device, and adverse events. We randomized 158 participants. At six weeks, 30 of 71 (42%) participants allocated to Anapen and 31 of 73 (43%) participants allocated to EpiPen were successful - RR 1.00 (95% CI 0.68-1.46). Success rates at one year were also similar, but digital injection was more common at one year with EpiPen (8/59, 14%) than Anapen (0/51, 0%, P = 0.007). When switched to a new device without specific training, success rates were higher with Auvi-Q (26/28, 93%) than other devices (39/80, 49%; P < 0.001). AAI device design is a major determinant of successful adrenaline administration. Success rates were low with several devices, but were high using the audio-prompt device Auvi-Q. © 2015 The Authors Allergy Published by John Wiley & Sons Ltd.
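The reported risk ratio and 95% confidence interval can be reproduced from the raw counts with the standard log-scale Wald approximation; a quick check in Python:

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio of a/n1 vs c/n2 with a Wald CI on the log scale."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lo, hi = (rr * math.exp(s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Six-week success: 30/71 (Anapen) vs 31/73 (EpiPen)
print(risk_ratio_ci(30, 71, 31, 73))   # ~ (0.99, 0.68, 1.46)
```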
Yousefi, Milad; Yousefi, Moslem; Ferreira, Ricardo Poley Martins; Kim, Joong Hoon; Fogliatto, Flavio S
2018-01-01
Long length of stay and overcrowding in emergency departments (EDs) are two common problems in the healthcare industry. To decrease the average length of stay (ALOS) and tackle overcrowding, numerous resources, including the numbers of doctors, nurses and receptionists, need to be adjusted, while a number of constraints are considered at the same time. In this study, an efficient method based on agent-based simulation, machine learning and the genetic algorithm (GA) is presented to determine the optimum resource allocation in emergency departments. The GA can effectively explore the entire domain of all 19 variables and identify the optimum resource allocation through evolution, mimicking the survival-of-the-fittest concept. A chaotic mutation operator is used in this study to boost GA performance. A model of the system needs to be run several thousand times through the GA evolution process to evaluate each solution, hence the process is computationally expensive. To overcome this drawback, a robust metamodel is initially constructed based on an agent-based system simulation. The simulation exhibits ED performance with various resource allocations and trains the metamodel. The metamodel is created with an ensemble of the adaptive neuro-fuzzy inference system (ANFIS), feedforward neural network (FFNN) and recurrent neural network (RNN) using the adaptive boosting (AdaBoost) ensemble algorithm. The proposed GA-based optimization approach is tested in a public ED, and it is shown to decrease the ALOS in this ED case study by 14%. Additionally, the proposed metamodel shows a 26.6% improvement compared to the average results of ANFIS, FFNN and RNN in terms of mean absolute percentage error (MAPE). Copyright © 2017 Elsevier B.V. All rights reserved.
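To give a flavor of the GA component, a chaotic mutation operator can drive perturbations with a logistic map instead of uniform random noise. The sketch below is illustrative: the 19-variable integer encoding, bounds, and the commented metamodel fitness call are assumptions, not the paper's exact operator.

```python
import random

def chaotic_mutate(genes, bounds, x=0.7, r=4.0):
    """Mutate one solution using a logistic-map chaotic sequence; each gene
    is an integer resource count (e.g. staff per shift) within bounds."""
    out = []
    for _g, (lo, hi) in zip(genes, bounds):
        x = r * x * (1 - x)                 # chaotic driver in (0, 1)
        out.append(lo + round(x * (hi - lo)))
    return out, x                            # return x to continue the orbit

# 19 decision variables (doctors, nurses, receptionists, ... per shift);
# the bounds are illustrative.
bounds = [(1, 10)] * 19
parent = [random.randint(lo, hi) for lo, hi in bounds]
child, _ = chaotic_mutate(parent, bounds)
# fitness = metamodel_predict_alos(child)   # hypothetical AdaBoost metamodel call
```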
2010-01-01
Background The district resource allocation formula in Malawi was recently reviewed to include stunting as a proxy measure of socioeconomic status. In many countries where the concept of need has been incorporated in resource allocation, composite indicators of socioeconomic status have been used. In the Malawi case, it is important to ascertain whether there are differences between using single variable or composite indicators of socioeconomic status in allocations made to districts, holding all other factors in the resource allocation formula constant. Methods Principal components analysis was used to calculate asset indices for all districts from variables that capture living standards using data from the Malawi Multiple Indicator Cluster Survey 2006. These were normalized and used to weight district populations. District proportions of national population weighted by both the simple and composite indicators were then calculated for all districts and compared. District allocations were also calculated using the two approaches and compared. Results The two types of indicators are highly correlated, with a Spearman rank correlation coefficient of 0.97 at the 1% level of significance. For 21 out of the 26 districts included in the study, proportions of national population weighted by the simple indicator are higher by an average of 0.6 percentage points. For the remaining 5 districts, district proportions of national population weighted by the composite indicator are higher by an average of 2 percentage points. Though the average percentage point differences are low and the actual allocations using both approaches highly correlated (ρ of 0.96), differences in actual allocations exceed 10% for 8 districts and have an average of 4.2% for the remaining 17. For 21 districts allocations based on the single variable indicator are higher. Conclusions Variations in district allocations made using either the simple or composite indicators of socioeconomic status are not statistically different to recommend one over the other. However, the single variable indicator is favourable for its ease of computation. PMID:20053274
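The composite-indicator construction described here is typically the first principal component of standardized asset variables, used to weight district populations. A compact sketch with synthetic data follows; the min-max normalization and the direction of the need weighting (poorer districts weighted up) are assumptions for illustration.

```python
import numpy as np

def asset_index(assets):
    """First principal component of standardized asset variables -- the
    usual construction of a composite wealth index."""
    Z = (assets - assets.mean(0)) / assets.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[0]                        # one score per district

rng = np.random.default_rng(1)
assets = rng.random((26, 6))                # 26 districts x 6 asset variables (synthetic)
pop = rng.integers(100_000, 900_000, 26)

idx = asset_index(assets)
w = (idx - idx.min()) / (idx.max() - idx.min())   # normalize to [0, 1] (assumption)
need_weighted = pop * (1 - w)               # poorer districts weighted up (assumption)
shares = need_weighted / need_weighted.sum()      # district shares of the budget
```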
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing.
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of a CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the graphics processing unit (GPU). We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation as well as the memory access mechanism are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU to relieve the GPU of an extra and unnecessary computation burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, leveraging the advantages of GPU platforms harmoniously and yielding significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
I/O-aware bandwidth allocation for petascale computing systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhou; Yang, Xu; Zhao, Dongfang
In the Big Data era, the gap between storage performance and an application's I/O requirement is increasing. I/O congestion caused by concurrent storage accesses from multiple applications is inevitable and severely harms performance. Conventional approaches either focus on optimizing an application's access pattern individually or handle I/O requests on a low-level storage layer without any knowledge from the upper-level applications. In this paper, we present a novel I/O-aware bandwidth allocation framework to coordinate ongoing I/O requests on petascale computing systems. The motivation behind this innovation is that the resource management system has a holistic view of both the system state and jobs' activities and can dynamically control the jobs' status or allocate resources on the fly during their execution. We treat a job's I/O requests as periodical subjobs within its lifecycle and transform the I/O congestion issue into a classical scheduling problem. Based on this model, we propose a bandwidth management mechanism as an extension to the existing scheduling system. We design several bandwidth allocation policies with different optimization objectives, either on user-oriented metrics or on system performance. We conduct extensive trace-based simulations using real job traces and I/O traces from a production IBM Blue Gene/Q system at Argonne National Laboratory. Experimental results demonstrate that our new design can improve job performance by more than 30%, as well as increasing system performance.
Pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Wu, Shaochuan; Tan, Xuezhi
2007-11-01
By analyzing the various kinds of address configuration algorithms, this paper provides a new pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks. In PRDAC, the first node that initializes the network randomly chooses a nonlinear shift register that generates an m-sequence. When another node joins the network, the initial node acts as an IP address configuration server: it computes an IP address from this nonlinear shift register, allocates that address, and tells the new node the generator polynomial of the shift register. By this means, when further nodes join the network, any node that has already obtained an IP address can act as a server and allocate an address to the newcomer. PRDAC can also efficiently avoid IP conflicts and deal with network partition and merging, just as prophet address (PA) allocation and the dynamic configuration and distribution protocol (DCDP) do. Furthermore, PRDAC has lower algorithmic and computational complexity than PA and rests on weaker assumptions. In addition, PRDAC radically avoids address conflicts and maximizes the utilization rate of IP addresses. Analysis and simulation results show that PRDAC offers rapid convergence and low overhead and is robust to changes in topology.
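The core of this scheme is drawing addresses from a maximal-length (m-) sequence, so every state in the cycle is visited exactly once before any repeat. The sketch below uses a Fibonacci LFSR with the well-known maximal tap set (8, 6, 5, 4); mapping states onto the host part of a /24 network is an illustrative assumption, not part of the paper.

```python
def msequence_states(taps=(8, 6, 5, 4), seed=0b0000_0001):
    """Fibonacci LFSR over GF(2); taps (8, 6, 5, 4) give a maximal-length
    (m-) sequence, so states cycle through all 255 nonzero 8-bit values."""
    n = max(taps)
    state = seed
    for _ in range((1 << n) - 1):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1        # XOR of the tapped bits
        state = ((state << 1) | fb) & ((1 << n) - 1)

# Allocate conflict-free host parts of, e.g., 10.0.0.x addresses (illustrative).
addrs = [f"10.0.0.{s}" for s in msequence_states()]
assert len(set(addrs)) == 255                   # no duplicates before wrap-around
```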
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the "generalist" (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and "hedging" against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
Minimizing the Sum of Completion Times with Resource Dependent Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the first is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that the problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as laptop computers, phones, and GPS devices, in order to prolong battery life.
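A useful anchor for this problem family: once processing times are fixed by a resource allocation (a common controllable-time model is p_j = w_j / r_j), the sum of completion times is minimized by shortest-processing-time (SPT) order and computed by prefix sums. The sketch below illustrates only that evaluation step, with illustrative numbers; it is not the paper's algorithm for the joint problem.

```python
def sum_completion_times(workloads, resources):
    """Processing time p_j = w_j / r_j (a common controllable-time model);
    with times fixed, SPT order minimizes the sum of completion times."""
    times = sorted(w / r for w, r in zip(workloads, resources))
    total, t = 0.0, 0.0
    for p in times:
        t += p                # completion time of the current job
        total += t
    return total

jobs = [4.0, 1.0, 6.0, 3.0]                    # work contents (illustrative)
even   = sum_completion_times(jobs, [1.0] * 4)
tilted = sum_completion_times(jobs, [1.5, 0.5, 1.5, 0.5])  # more resource to big jobs
print(even, tilted)           # compare two allocations of the same budget
```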
Allocation of Playing Time within Team Sports--A Problem for Discussion
ERIC Educational Resources Information Center
Lorentzen, Torbjørn
2017-01-01
The background of the article is the recurrent discussion about allocation of playing time in team sports involving children and young athletes. The objective is to analyse "why" playing time is a topic for discussion among parents, coaches and athletes. The following question is addressed: Under which condition is it "fair" to…
Individual Differences in Faculty Research Time Allocations across 13 Countries
ERIC Educational Resources Information Center
Bentley, Peter James; Kyvik, Svein
2013-01-01
In research universities, research time is often too scarce to satiate the wishes of all faculty and must be allocated according to guidelines and principles. We examine self-reported research hours for full-time faculty at research universities in 13 countries (Argentina, Australia, Brazil, Canada, China, Finland, Germany, Italy, Malaysia,…
Portraits of Principal Practice: Time Allocation and School Principal Work
ERIC Educational Resources Information Center
Sebastian, James; Camburn, Eric M.; Spillane, James P.
2018-01-01
Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…
A Profile of Mathematics Instruction Time in Irish Second Level Schools
ERIC Educational Resources Information Center
Prendergast, Mark; O'Meara, Niamh
2017-01-01
Similar to counties such as the UK and Netherlands, second level schools in Ireland are free to decide how to allocate instruction time between curriculum subjects. This results in variations between the quantum of time allocated to teaching mathematics in different schools and between different class groups within the same school. This…
Dynamics of Choice: A Tutorial
ERIC Educational Resources Information Center
Baum, William M.
2010-01-01
Choice may be defined as the allocation of behavior among activities. Since all activities take up time, choice is conveniently thought of as the allocation of time among activities, even if activities like pecking are most easily measured by counting. Since dynamics refers to change through time, the dynamics of choice refers to change of…
Research on schedulers for astronomical observatories
NASA Astrophysics Data System (ADS)
Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian
2012-09-01
The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources, while minimizing the total execution time. A problem of higher complexity, called the Flexible-JSP (FJSP), arises when the tasks can be executed by different resources, i.e. by different telescopes, and it focuses on determining a routing policy (i.e., which machine to assign for each operation) in addition to the traditional scheduling decisions (i.e., determining the starting time of each operation). In most cases there is no single best approach to the planning problem and, therefore, various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient to determine the best approach. A short-term scheduler, for instance, has to find a good solution with the minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environment conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO) where the conclusions of this analysis are applied to develop a suitable scheduling routine.
Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.
ERIC Educational Resources Information Center
Bhat, U. Narayan; Nance, Richard E.
The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…
Shen, Yanyan; Wang, Shuqiang; Wei, Zhiming
2014-01-01
Dynamic spectrum sharing has drawn intensive attention in cognitive radio networks. The secondary users are allowed to use the available spectrum to transmit data if the interference to the primary users is maintained at a low level. Cooperative transmission for secondary users can reduce the transmission power and thus improve the performance further. We study the joint subchannel pairing and power allocation problem in relay-based cognitive radio networks. The objective is to maximize the sum rate of the secondary user that is helped by an amplify-and-forward relay. The individual power constraints at the source and the relay, the subchannel pairing constraints, and the interference power constraints are considered. The problem under consideration is formulated as a mixed integer programming problem. By the dual decomposition method, a joint optimal subchannel pairing and power allocation algorithm is proposed. To reduce the computational complexity, two suboptimal algorithms are developed. Simulations have been conducted to verify the performance of the proposed algorithms in terms of sum rate and average running time under different conditions.
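Under a sum-power budget, the per-subchannel power in problems of this kind typically takes a water-filling form, with interference limits entering as per-subchannel caps. The sketch below finds the water level by bisection; it illustrates only the power step under stated assumptions, not the joint pairing optimization or the paper's exact dual decomposition.

```python
import numpy as np

def waterfill(gains, p_total, p_mask, tol=1e-9):
    """Water-filling with per-subchannel caps: p_k = min(mask_k, (mu - 1/g_k)+).
    The caps stand in for interference-power limits toward primary users."""
    lo, hi = 0.0, p_total + 1.0 / gains.min() + p_mask.max()
    while hi - lo > tol:
        mu = (lo + hi) / 2                       # candidate water level
        p = np.clip(mu - 1.0 / gains, 0.0, p_mask)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.clip(lo - 1.0 / gains, 0.0, p_mask)

g = np.array([2.0, 1.0, 0.5, 0.25])              # effective gains (illustrative)
p = waterfill(g, p_total=4.0, p_mask=np.full(4, 2.0))
print(p, p.sum())                                # stronger subchannels get more power
```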
Resource Allocation in a Repetitive Project Scheduling Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Samuel, Biju; Mathew, Jeeno
2018-03-01
Resource allocation is the procedure of assigning the available resources in an economical and productive way. It involves scheduling the available resources and activities while considering both resource availability and the total project completion time. Resource provisioning and allocation addresses this issue by permitting service providers to manage the resources for every individual resource demand. A probabilistic selection procedure has been developed in order to ensure varied selection of chromosomes.
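One standard probabilistic selection procedure for chromosomes is fitness-proportional ("roulette wheel") selection; the abstract does not specify the exact variant used, so the sketch below is purely illustrative.

```python
import random

def roulette_select(population, fitness):
    """Fitness-proportional ('roulette wheel') choice of one chromosome, a
    simple probabilistic selection procedure for a scheduling GA."""
    pick = random.uniform(0.0, sum(fitness))
    acc = 0.0
    for chrom, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return chrom
    return population[-1]

# Chromosomes encode activity orderings; fitness here is illustrative, e.g.
# the inverse of project completion time so shorter schedules are favored.
pop = [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1]]
fit = [1 / 42.0, 1 / 37.0, 1 / 51.0]
parent = roulette_select(pop, fit)
```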
Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)
1989-02-01
defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems...specialist in group dynamics, assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer...allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island Drs. Geoffery
DOT National Transportation Integrated Search
2000-03-09
The Texas Department of Transportation's (TxDOT) "smart highway" project, called TransGuide, scheduled to go on line in 1995 in San Antonio, utilizes high-speed computer technology to help drivers anticipate traffic conditions -- in an effort to inc...
Hwyneeds : a sensitivity analysis
DOT National Transportation Integrated Search
2000-01-01
County highway needs identified in Iowa's Quadrennial Needs Study are used to determine the amount of funding allocated to each Iowa county for secondary highway improvements. The Iowa Department of Transportation uses a computer algorithm called HWY...
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
Preventing messaging queue deadlocks in a DMA environment
Blocksome, Michael A; Chen, Dong; Gooding, Thomas; Heidelberger, Philip; Parker, Jeff
2014-01-14
Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access (DMA) controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
On the Modeling and Management of Cloud Data Analytics
NASA Astrophysics Data System (ADS)
Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni
A new era is dawning where vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged where seemingly limitless compute and storage resources are being provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good performance at run-time to data analytics workloads is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their pattern of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. All along, we use the Map-Reduce paradigm as an illustration of data analytics.
Information sciences experiment system
NASA Technical Reports Server (NTRS)
Katzberg, Stephen J.; Murray, Nicholas D.; Benz, Harry F.; Bowker, David E.; Hendricks, Herbert D.
1990-01-01
The rapid expansion of remote sensing capability over the last two decades will take another major leap forward with the advent of the Earth Observing System (Eos). An approach is presented that will permit experiments and demonstrations in onboard information extraction. The approach is a non-intrusive, eavesdropping mode in which a small amount of spacecraft real estate is allocated to an onboard computation resource. How such an approach allows the evaluation of advanced technology in the space environment, advanced techniques in information extraction for both Earth science and information science studies, direct to user data products, and real-time response to events, all without affecting other on-board instrumentation is discussed.
An approximate dynamic programming approach to resource management in multi-cloud scenarios
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo
2017-03-01
The programmability and the virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand for cloud services, mainly devoted to storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find an approximate solution online.
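As a toy version of the reinforcement-learning idea, a tabular Q-learning agent can learn an allocation policy over discretized load states. The reward and transition model below are illustrative stand-ins for the paper's MDP, not its actual formulation.

```python
import random

# Tabular Q-learning sketch for a toy CMB decision: in each load state,
# place a request on cloud A, cloud B, or reject it.
N_STATES, ACTIONS = 5, ("cloud_A", "cloud_B", "reject")
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    """Hypothetical environment: revenue minus a congestion cost."""
    reward = (2.0, 1.5, 0.0)[a] - 0.5 * s
    s_next = min(N_STATES - 1, max(0, s + (1 if a < 2 else -1)))
    return reward, s_next

s = 0
for _ in range(10_000):
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(range(len(ACTIONS)), key=Q[s].__getitem__))
    r, s2 = step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])   # TD update
    s = s2

policy = [ACTIONS[max(range(len(ACTIONS)), key=Q[s].__getitem__)]
          for s in range(N_STATES)]
print(policy)      # e.g. accept when lightly loaded, reject when congested
```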
A coded tracking telemetry system
Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.; Amlaner, Charles J.
1989-01-01
We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a micro-computer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operating life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
Hierarchical MFMO Circuit Modules for an Energy-Efficient SDR DBF
NASA Astrophysics Data System (ADS)
Mar, Jeich; Kuo, Chi-Cheng; Wu, Shin-Ru; Lin, You-Rong
The hierarchical multi-function matrix operation (MFMO) circuit modules are designed using coordinate rotations digital computer (CORDIC) algorithm for realizing the intensive computation of matrix operations. The paper emphasizes that the designed hierarchical MFMO circuit modules can be used to develop a power-efficient software-defined radio (SDR) digital beamformer (DBF). The formulas of the processing time for the scalable MFMO circuit modules implemented in field programmable gate array (FPGA) are derived to allocate the proper logic resources for the hardware reconfiguration. The hierarchical MFMO circuit modules are scalable to the changing number of array branches employed for the SDR DBF to achieve the purpose of power saving. The efficient reuse of the common MFMO circuit modules in the SDR DBF can also lead to energy reduction. Finally, the power dissipation and reconfiguration function in the different modes of the SDR DBF are observed from the experiment results.
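For reference, CORDIC in rotation mode computes sines and cosines with shift-and-add style updates plus a constant gain correction. A floating-point sketch follows; the FPGA modules described above would use fixed-point arithmetic, so this only illustrates the algorithm, not the hardware design.

```python
import math

def cordic_rotate(theta, iterations=24):
    """CORDIC in rotation mode: returns (cos(theta), sin(theta)) using only
    add/shift-style updates; K compensates the accumulated rotation gain.
    Converges for |theta| up to about 1.74 rad."""
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x, y

print(cordic_rotate(math.pi / 5))            # ~ (0.8090, 0.5878)
```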
McKanna, James A; Pavel, Misha; Jimison, Holly
2010-11-13
Assessment of cognitive functionality is an important aspect of care for elders. Unfortunately, few tools exist to measure divided attention, the ability to allocate attention to different aspects of tasks. An accurate determination of divided attention would allow inference of generalized cognitive decline, as well as providing a quantifiable indicator of an important component of driving skill. We propose a new method for determining relative divided attention ability through unobtrusive monitoring of computer use. Specifically, we measure performance on a dual-task cognitive computer exercise as part of a health coaching intervention. This metric indicates whether the user has the ability to pay attention to both tasks at once, or is primarily attending to one task at a time (sacrificing optimal performance). The monitoring of divided attention in a home environment is a key component of both the early detection of cognitive problems and for assessing the efficacy of coaching interventions.
Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.
Kool, Wouter; Gershman, Samuel J; Cushman, Fiery A
2017-09-01
Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.
Back pressure based multicast scheduling for fair bandwidth allocation.
Sarkar, Saswati; Tassiulas, Leandros
2005-09-01
We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
Two-way DF relaying assisted D2D communication: ergodic rate and power allocation
NASA Astrophysics Data System (ADS)
Ni, Yiyang; Wang, Yuxi; Jin, Shi; Wong, Kai-Kit; Zhu, Hongbo
2017-12-01
In this paper, we investigate the ergodic rate for a device-to-device (D2D) communication system aided by a two-way decode-and-forward (DF) relay node. We first derive closed-form expressions for the ergodic rate of the D2D link under asymmetric and symmetric cases, respectively. We subsequently discuss two special scenarios, the weak-interference case and the high signal-to-noise ratio case, and derive tight approximations for each of the considered scenarios. Assuming that each transmitter only has access to its own statistical channel state information (CSI), we further derive a closed-form power allocation strategy to improve the system performance according to the analytical results for the ergodic rate. Furthermore, some insights are provided into the power allocation strategy based on the analytical results. The strategies are easy to compute and require knowledge of only the channel statistics. Numerical results show the accuracy of the analysis under various conditions and verify the effectiveness of the power allocation strategy.
An improved approach of register allocation via graph coloring
NASA Astrophysics Data System (ADS)
Gao, Lei; Shi, Ce
2005-03-01
Register allocation is an important part of an optimizing compiler. The algorithm of register allocation via graph coloring was first implemented by Chaitin and his colleagues and improved by Briggs and others. By abstracting register allocation to graph coloring, the allocation process is simplified. Since the number of physical registers is limited, coloring of the interference graph cannot succeed for every node, and the uncolored nodes must be spilled. There is an assumption that almost all allocation methods obey: when a register is allocated to a variable v, it cannot be used by others before v exits, even if v is not used for a long time. This may cause a waste of register resources. The authors relax this restriction under certain conditions and make some improvements. In this method, one register can be mapped to two or more interfering "living" live ranges at the same time if they satisfy certain requirements. An operation named merge is defined which can arrange for two interfering nodes to occupy the same register at some cost. Thus, the register resource can be used more effectively and the cost of memory access can be reduced greatly.
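The baseline Chaitin-style allocator alternates a "simplify" phase (repeatedly removing nodes of degree < k) with a "select" phase (assigning colors in reverse removal order). Below is a minimal sketch that returns None where a spill decision would be made; the merge operation described above is not modeled, and the interference graph is illustrative.

```python
def color_graph(interference, k):
    """Chaitin-style allocation: 'simplify' nodes of degree < k onto a stack,
    then 'select' colors in reverse order; returns None if a spill is needed."""
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    while graph:
        cand = next((v for v in graph if len(graph[v]) < k), None)
        if cand is None:
            return None                      # would choose a live range to spill
        for n in graph[cand]:
            graph[n].discard(cand)
        stack.append((cand, interference[cand]))
        del graph[cand]
    colors = {}
    for v, neighbors in reversed(stack):     # select phase
        used = {colors[n] for n in neighbors if n in colors}
        colors[v] = min(c for c in range(k) if c not in used)
    return colors

# Live ranges a..d with a symmetric interference relation (illustrative).
ig = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(color_graph(ig, k=3))                  # a 3-coloring, i.e. 3 registers suffice
```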
Characteristics of screen media use associated with higher BMI in young adolescents.
Bickham, David S; Blood, Emily A; Walls, Courtney E; Shrier, Lydia A; Rich, Michael
2013-05-01
This study investigates how characteristics of young adolescents' screen media use are associated with their BMI. By examining relationships between BMI and both time spent using each of 3 screen media and level of attention allocated to use, we sought to contribute to the understanding of mechanisms linking media use and obesity. We measured heights and weights of 91 13- to 15-year-olds and calculated their BMIs. Over 1 week, participants completed a weekday and a Saturday 24-hour time-use diary in which they reported the amount of time they spent using TV, computers, and video games. Participants carried handheld computers and responded to 4 to 7 random signals per day by completing onscreen questionnaires reporting activities to which they were paying primary, secondary, and tertiary attention. Higher proportions of primary attention to TV were positively associated with higher BMI. The difference between 25th and 75th percentiles of attention to TV corresponded to an estimated +2.4 BMI points. Time spent watching television was unrelated to BMI. Neither duration of use nor extent of attention paid to video games or computers was associated with BMI. These findings support the notion that attention to TV is a key element of the increased obesity risk associated with TV viewing. Mechanisms may include the influence of TV commercials on preferences for energy-dense, nutritionally questionable foods and/or eating while distracted by TV. Interventions that interrupt these processes may be effective in decreasing obesity among screen media users.
NASA Astrophysics Data System (ADS)
Prada, Jose Fernando
Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
Simulation of Transcritical CO2 Refrigeration System with Booster Hot Gas Bypass in Tropical Climate
NASA Astrophysics Data System (ADS)
Santosa, I. D. M. C.; Sudirman; Waisnawa, IGNS; Sunu, PW; Temaja, IW
2018-01-01
Computer simulation is significantly important for performance analysis, since building an experimental rig requires a large allocation of cost and time, especially for a CO2 refrigeration system; modifying the rig also requires additional cost and time. One computer simulation program that is well suited to refrigeration systems is the Engineering Equation Solver (EES). Regarding CO2 refrigeration systems, environmental issues have become a priority in refrigeration system development, since carbon dioxide (CO2) is a natural and clean refrigerant. The aim of this study is to analyse the effectiveness of an EES simulation of a transcritical CO2 refrigeration system with booster hot gas bypass at high outdoor temperatures. The research was carried out by theoretical study and numerical analysis of the refrigeration system using the EES program. Data input and simulation validation were obtained from experimental and secondary data. The results showed that the coefficient of performance (COP) decreased gradually as the outdoor temperature increased. The program can calculate the performance of the refrigeration system accurately and with a quick running time, so it is an important preliminary reference for improving CO2 refrigeration system designs for hot climates.
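The cycle-level relations that EES solves can be sketched in a few lines of Python. The version below uses the CoolProp property library for CO2 (an assumption; the paper uses EES) and a simple single-stage transcritical cycle with illustrative pressures and isentropic efficiency, so it is a rough stand-in for the booster hot-gas-bypass model, not a reproduction of it.

```python
from CoolProp.CoolProp import PropsSI   # CO2 property backend (assumption)

p_evap, p_gc = 30e5, 100e5          # Pa: evaporator and gas-cooler pressures (illustrative)
T_gc_out, eta_is = 313.15, 0.7      # 40 C gas-cooler exit, isentropic efficiency

h1 = PropsSI('H', 'P', p_evap, 'Q', 1, 'CO2')       # saturated vapor at compressor inlet
s1 = PropsSI('S', 'P', p_evap, 'Q', 1, 'CO2')
h2s = PropsSI('H', 'P', p_gc, 'S', s1, 'CO2')       # isentropic discharge enthalpy
h2 = h1 + (h2s - h1) / eta_is                       # actual discharge enthalpy
h3 = PropsSI('H', 'P', p_gc, 'T', T_gc_out, 'CO2')  # gas-cooler exit
h4 = h3                                             # isenthalpic expansion

COP = (h1 - h4) / (h2 - h1)         # cooling effect / compressor work
print(f"COP = {COP:.2f}")           # COP drops as T_gc_out (outdoor temp) rises
```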
Multiresource allocation and scheduling for periodic soft real-time applications
NASA Astrophysics Data System (ADS)
Gopalan, Kartik; Chiueh, Tzi-cker
2001-12-01
Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of the soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of overall timing guarantees is ultimately determined by the properties of individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.
Converse, Sarah J.; Shelley, Kevin J.; Morey, Steve; Chan, Jeffrey; LaTier, Andrea; Scafidi, Carolyn; Crouse, Deborah T.; Runge, Michael C.
2011-01-01
The resources available to support conservation work, whether time or money, are limited. Decision makers need methods to help them identify the optimal allocation of limited resources to meet conservation goals, and decision analysis is uniquely suited to assist with the development of such methods. In recent years, a number of case studies have been described that examine optimal conservation decisions under fiscal constraints; here we develop methods to look at other types of constraints, including limited staff and regulatory deadlines. In the US, Section Seven consultation, an important component of protection under the federal Endangered Species Act, requires that federal agencies overseeing projects consult with federal biologists to avoid jeopardizing species. A benefit of consultation is negotiation of project modifications that lessen impacts on species, so staff time allocated to consultation supports conservation. However, some offices have experienced declining staff, potentially reducing the efficacy of consultation. This is true of the US Fish and Wildlife Service's Washington Fish and Wildlife Office (WFWO) and its consultation work on federally-threatened bull trout (Salvelinus confluentus). To improve effectiveness, WFWO managers needed a tool to help allocate this work to maximize conservation benefits. We used a decision-analytic approach to score projects based on the value of staff time investment, and then identified an optimal decision rule for how scored projects would be allocated across bins, where projects in different bins received different time investments. We found that, given current staff, the optimal decision rule placed 80% of informal consultations (those where expected effects are beneficial, insignificant, or discountable) in a short bin where they would be completed without negotiating changes. The remaining 20% would be placed in a long bin, warranting an investment of seven days, including time for negotiation. For formal consultations (those where expected effects are significant), 82% of projects would be placed in a long bin, with an average time investment of 15 days. The WFWO is using this decision-support tool to help allocate staff time. Because workload allocation decisions are iterative, we describe a monitoring plan designed to increase the tool's efficacy over time. This work has general application beyond Section Seven consultation, in that it provides a framework for efficient investment of staff time in conservation when such time is limited and when regulatory deadlines prevent an unconstrained approach. © 2010.
Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft
NASA Technical Reports Server (NTRS)
Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas
2001-01-01
This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving the maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
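As a baseline for what a control allocator does, the pseudo-inverse solution of B u = m_cmd with limit clipping is a standard starting point. The paper's allocator is nonlinear and adaptive, so the sketch below only illustrates the moment-to-effector mapping and a crude failure reconfiguration (zeroing a failed effector's column); the effectiveness matrix and limits are illustrative.

```python
import numpy as np

def pinv_allocate(B, m_cmd, u_min, u_max):
    """Baseline control allocation: solve B u = m_cmd by pseudo-inverse and
    clip to effector limits. (The paper's allocator is nonlinear; dropping a
    column of B is a crude model of failure isolation.)"""
    u = np.linalg.pinv(B) @ m_cmd
    return np.clip(u, u_min, u_max)

# 3 moment axes (roll, pitch, yaw), 5 effectors; B is illustrative.
B = np.array([[ 1.0, -1.0, 0.2,  0.0, 0.3],
              [ 0.4,  0.4, 1.0, -1.0, 0.0],
              [ 0.1,  0.1, 0.0,  0.2, 1.0]])
m_cmd = np.array([0.5, -0.2, 0.1])
u = pinv_allocate(B, m_cmd, u_min=-0.5, u_max=0.5)

# Failure reconfiguration: zero the failed effector's column, re-allocate.
B_fail = B.copy(); B_fail[:, 2] = 0.0
u_fail = pinv_allocate(B_fail, m_cmd, -0.5, 0.5)
print(u, u_fail)
```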
Reading Time Allocation Strategies and Working Memory Using Rapid Serial Visual Presentation
ERIC Educational Resources Information Center
Busler, Jessica N.; Lazarte, Alejandro A.
2017-01-01
Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce…
ERIC Educational Resources Information Center
Lee, Moosung; Hallinger, Philip
2012-01-01
This study examines the impact of macro-context factors on the behavior of school principals. More specifically, the article illuminates how a nation's level of economic development, societal culture, and educational system influence the amount of time principals devote to their job role and shape their allocation of time to instructional…
Meyniel, Florent; Safra, Lou; Pessiglione, Mathias
2014-01-01
A pervasive cost-benefit problem is how to allocate effort over time, i.e., deciding when to work and when to rest. An economic decision perspective would suggest that the duration of effort is determined beforehand, depending on expected costs and benefits. However, the literature on exercise performance emphasizes that decisions are made on the fly, depending on physiological variables. Here, we propose and validate a general model of effort allocation that integrates these two views. In this model, a single variable, termed cost evidence, accumulates during effort and dissipates during rest, triggering effort cessation and resumption when reaching bounds. We assumed that such a basic mechanism could explain implicit adaptation, whereas the latent parameters (slopes and bounds) could be amenable to explicit anticipation. A series of behavioral experiments manipulating effort duration and difficulty was conducted in a total of 121 healthy humans to dissociate implicit-reactive from explicit-predictive computations. Results show 1) that effort and rest durations are adapted on the fly to variations in cost-evidence level, 2) that the cost-evidence fluctuations driving the behavior do not match explicit ratings of exhaustion, and 3) that actual difficulty impacts effort duration whereas expected difficulty impacts rest duration. Taken together, our findings suggest that cost evidence is implicitly monitored online, with an accumulation rate proportional to actual task difficulty. In contrast, cost-evidence bounds and dissipation rate might be adjusted in anticipation, depending on explicit task difficulty. PMID:24743711
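The two-bound accumulator is simple enough to simulate directly; the rates and bounds below are illustrative values, not the paper's fitted parameters.

```python
import numpy as np

def simulate_effort_rest(T=600, dt=0.1, accum=0.8, dissip=0.5,
                         upper=10.0, lower=2.0):
    """Simulate the two-bound cost-evidence model.

    During effort, cost evidence rises at rate `accum` (proportional to
    actual task difficulty); during rest it falls at rate `dissip`.
    Effort stops when evidence hits `upper` and resumes at `lower`.
    Returns the evidence trace and the effort/rest state trace.
    """
    n = int(T / dt)
    c = np.zeros(n)
    working = np.zeros(n, dtype=bool)
    state = True            # start in effort
    working[0] = state
    for t in range(1, n):
        rate = accum if state else -dissip
        c[t] = max(c[t - 1] + rate * dt, 0.0)
        if state and c[t] >= upper:
            state = False   # exhausted: switch to rest
        elif not state and c[t] <= lower:
            state = True    # recovered: resume effort
        working[t] = state
    return c, working

c, w = simulate_effort_rest()
print(f"fraction of time on task: {w.mean():.2f}")
```

Raising `accum` (a harder task) shortens effort bouts on the fly, while the bounds and dissipation rate are the latent parameters the authors suggest are adjusted in anticipation.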
Work hours affect spouse's cortisol secretion--for better and for worse.
Klumb, Petra; Hoppmann, Christiane; Staats, Melanie
2006-01-01
In a sample of 52 German dual-earner couples with at least one child under age 5, we examined the bodily costs and benefits of the amount of time each spouse spent on productive activities. Diary reports of time allocated to formal and informal work activities were analyzed according to the Actor-Partner Interdependence model. Hierarchical linear models showed that each hour an individual allocated to market, as well as household work, increased his or her total cortisol concentration (by 192 and 134 nmol/l, respectively). Unexpectedly, the time the spouse allocated to paid work also raised an individual's total cortisol concentration (by 64 nmol/l). In line with our expectations, there was a tendency for the time the spouse allocated to household work to decrease the individual's cortisol concentration (by 81 nmol/l). This study contributes to the body of evidence on the complex nature of social relationships and complements the literature on specific working conditions and couples' well-being.
Asset allocation using option-implied moments
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.; Tolos, S. M.
2017-09-01
This study uses an option-implied distribution as the input in asset allocation. The computation of risk-neutral densities (RND) is based on the Dow Jones Industrial Average (DJIA) index option and its constituents. Since the RND estimation does not incorporate a risk premium, the conversion of RND into risk-world density (RWD) is required. The RWD is obtained through parametric calibration using beta distributions. The mean, volatility, and covariance are then calculated to construct the portfolio. The performance of the portfolio is evaluated by using portfolio volatility and the Sharpe ratio.
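Given the calibrated risk-world mean vector and covariance matrix, the construction and evaluation steps are standard. A minimal sketch, assuming unconstrained mean-variance weights and illustrative moment inputs (the RND-to-RWD calibration itself is omitted):

```python
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion=3.0):
    """Unconstrained mean-variance weights w ∝ cov^{-1} mu, rescaled to
    sum to one (fully invested; shorting allowed)."""
    w = np.linalg.solve(cov, mu) / risk_aversion
    return w / w.sum()

def sharpe_ratio(w, mu, cov, rf=0.0):
    """Ex-ante Sharpe ratio of portfolio w given moment inputs."""
    return (w @ mu - rf) / np.sqrt(w @ cov @ w)

# Illustrative option-implied moments for three assets (annualized).
mu = np.array([0.06, 0.08, 0.05])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.03]])
w = mean_variance_weights(mu, cov)
print(w, sharpe_ratio(w, mu, cov, rf=0.01))
```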
Distortion outage minimization in Nakagami fading using limited feedback
NASA Astrophysics Data System (ADS)
Wang, Chih-Hong; Dey, Subhrakanti
2011-12-01
We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.
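For intuition, outage under a constant (no-CSI) allocation is easy to estimate by Monte Carlo, since the squared Nakagami-m envelope is Gamma(m, Ω/m)-distributed. The distortion expression below is a simplified stand-in for the paper's fusion-center MSE, and all parameter values are assumptions:

```python
import numpy as np

def outage_prob(P, m=2.0, omega=1.0, n_ch=4, sigma2=1.0,
                d_max=0.1, n_trials=100_000, seed=1):
    """Monte Carlo distortion-outage estimate under constant power P per
    clusterhead. Squared Nakagami-m fading ~ Gamma(m, omega/m). A
    simplified proxy D = sigma2 / (1 + total SNR) stands in for the
    fusion-center distortion."""
    rng = np.random.default_rng(seed)
    g = rng.gamma(shape=m, scale=omega / m, size=(n_trials, n_ch))
    snr = (P * g).sum(axis=1)       # orthogonal channels combine at the FC
    d = sigma2 / (1.0 + snr)
    return (d > d_max).mean()

for P in (0.5, 1.0, 2.0, 4.0):
    print(P, outage_prob(P))        # outage falls as power rises
```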
Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Qianqian; Blohm, Andrew; Liu, Bo
A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow a practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which aims at assessing local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation and is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which is included to derive Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. (C) 2016 American Society of Civil Engineers.
Reduction of Decision-Making Time in the Air Defense Management
2013-06-01
This work builds on research in threat evaluation and weapon allocation (Cohen, Freeman, & Thompson, 1997; Turan, 2012) and on evaluations of the performance of TEWA systems (Fredrik…). Turan (2012) used computed threat values to propose weapon allocations and studied only static weapon-target allocation, evaluating proximity parameters (CPA, time to CPA, CPA in units of time, time before hit, distance) and capability parameters (target type, weapon…).
DNS of droplet motion in a turbulent flow
NASA Astrophysics Data System (ADS)
Rosso, Michele; Elghobashi, S.
2013-11-01
The objective of our research is to study the multi-way interactions between turbulence and vaporizing liquid droplets by performing direct numerical simulations (DNS). The freely-moving droplets are fully resolved in 3D space and time and all the relevant scales of the turbulent motion are simultaneously resolved down to the smallest length- and time-scales. Our DNS solve the unsteady three-dimensional Navier-Stokes and continuity equations throughout the whole computational domain, including the interior of the liquid droplets. The droplet surface motion and deformation are captured accurately by using the Level Set method. The pressure jump condition, density and viscosity discontinuities across the interface, as well as surface tension, are accounted for. Here, we present only the results of the first stage of our research, which considers the effects of turbulence on the shape change of an initially spherical liquid droplet, at a density ratio (of liquid to carrier fluid) of 1000, moving in isotropic turbulent flow. We validate our results via comparison with available experiments. This research has been supported by NSF-CBET Award 0933085 and NSF PRAC (Petascale Computing Resource Allocation) Award.
Louisiana DOTD maintenance budget allocation system: final report.
DOT National Transportation Integrated Search
2002-11-01
This project developed a computer system to assist Louisiana Department of Transportation and Development (LA DOTD) maintenance managers in the preparation of zero-based, needs-driven annual budget plans for routine maintenance. This includes pavemen...
Radar Control Optimal Resource Allocation
2015-07-13
other tunable parameters of radars [17, 18]. Such radar resource scheduling usually demands massive computation; even myopic optimization is costly, and in the non-myopic context the computational problem becomes exponentially more difficult, reducing the validity of the optimal choice of radar resource. The optimal time is computed as t* = (ασ²q + σr√(αq(σ + r + αq))) / (αq²r − ασq² + qr²) (19). We are only interested in t* > 1, and solving this inequality we obtain the…
A cognitive gateway-based spectrum sharing method in downlink round robin scheduling of LTE system
NASA Astrophysics Data System (ADS)
Deng, Hongyu; Wu, Cheng; Wang, Yiming
2017-07-01
A key challenge in LTE is the efficient allocation of radio spectrum resources. The traditional Round Robin (RR) scheduling scheme may leave many resource residues when allocating resources: when the number of users in the current transmission time interval (TTI) does not divide evenly into the resource block groups (RBGs), and this situation persists for a long time, spectrum utilization is greatly decreased. In this paper, a novel spectrum allocation scheme based on a cognitive gateway (CG) is proposed, in which LTE spectrum utilization and the CG's throughput are greatly increased by allocating the idle resource blocks in the shared TTI of the LTE system to the CG. Our simulation results show that this spectrum resource sharing method can improve LTE spectral utilization and increase the CG's throughput as well as network use time.
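The residue effect and the proposed reuse can be shown in a few lines: when the user count does not evenly divide the RBGs of a TTI, plain RR leaves the remainder idle, and the CG scheme hands exactly those idle groups to the cognitive gateway. Function and variable names are illustrative:

```python
def rr_with_cg(n_rbg, users, cg="CG"):
    """Round-robin RBG allocation for one TTI; the idle residue goes to
    the cognitive gateway instead of being wasted."""
    alloc = {}
    per_user = n_rbg // len(users)           # full round-robin share
    residue = n_rbg - per_user * len(users)  # RBGs plain RR leaves idle
    rbg = 0
    for u in users:
        alloc[u] = list(range(rbg, rbg + per_user))
        rbg += per_user
    alloc[cg] = list(range(rbg, n_rbg))      # residue reused by the CG
    return alloc, residue

alloc, residue = rr_with_cg(n_rbg=17, users=["UE1", "UE2", "UE3", "UE4", "UE5"])
print(residue, alloc["CG"])   # 2 idle RBGs now carry CG traffic
```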
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, the complexity target is mapped to a budget of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
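A minimal sketch of the budgeted mode-selection idea, assuming an offline table of (estimated time, mode combination) pairs of the kind the paper's statistics would supply; the costs and mode names are made up for illustration:

```python
def select_mode_set(remaining_budget, remaining_units, mode_sets):
    """Pick the most thorough mode combination whose estimated cost per
    coding unit fits the per-unit share of the remaining time budget.

    mode_sets: list of (estimated_time, modes), sorted cheap -> thorough.
    """
    per_unit = remaining_budget / max(remaining_units, 1)
    chosen = mode_sets[0]                 # cheapest set as the fallback
    for cost, modes in mode_sets:
        if cost <= per_unit:
            chosen = (cost, modes)        # affordable and more thorough
    return chosen

mode_sets = [(1.0, ["MERGE"]),
             (2.5, ["MERGE", "2Nx2N"]),
             (4.0, ["MERGE", "2Nx2N", "NxN"]),
             (6.0, ["MERGE", "2Nx2N", "NxN", "INTRA"])]
cost, modes = select_mode_set(remaining_budget=300.0,
                              remaining_units=100, mode_sets=mode_sets)
print(cost, modes)   # the 2.5-unit set fits a 3.0 per-unit budget
```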
Ding, Yan; Fei, Yang; Xu, Biao; Yang, Jun; Yan, Weirong; Diwan, Vinod K; Sauerborn, Rainer; Dong, Hengjin
2015-07-25
Studies into the costs of syndromic surveillance systems are rare, especially for estimating the direct costs involved in implementing and maintaining these systems. An Integrated Surveillance System in rural China (ISSC project), with the aim of providing an early warning system for outbreaks, was implemented; village clinics were the main surveillance units. Village doctors expressed their willingness to join in the surveillance if a proper subsidy was provided. This study aims to measure the costs of data collection by village clinics to provide a reference regarding the subsidy level required for village clinics to participate in data collection. We conducted a cross-sectional survey with a village clinic questionnaire and a staff questionnaire using a purposive sampling strategy. We tracked reported events using the ISSC internal database. Cost data included staff time, and the annual depreciation and opportunity costs of computers. We measured the village doctors' time costs for data collection by multiplying the number of full-time employment equivalents devoted to the surveillance by the village doctors' annual salaries and benefits, which equaled their net incomes. We estimated the depreciation and opportunity costs of computers by calculating the equivalent annual computer cost and then allocating this to the surveillance based on the percentage usage. The estimated total annual cost of collecting data was 1,423 Chinese Renminbi (RMB) in 2012 (P25 = 857, P75 = 3,284), including 1,250 RMB (P25 = 656, P75 = 3,000) in staff time costs and 134 RMB (P25 = 101, P75 = 335) in depreciation and opportunity costs of computers. The total costs of collecting data from the village clinics for the syndromic surveillance system were calculated to be low compared with the individual net income in County A.
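The cost construction reduces to two terms: staff time (FTE fraction × annual net income) plus the surveillance share of the computer's equivalent annual cost. A sketch with illustrative figures, not the survey's data:

```python
def equivalent_annual_cost(price, years, rate):
    """Annuity-equivalent annual cost of a capital item (the computer)."""
    return price * rate / (1.0 - (1.0 + rate) ** -years)

def surveillance_cost(fte_fraction, annual_income,
                      computer_price, computer_life, rate, usage_share):
    staff = fte_fraction * annual_income            # time cost
    computer = usage_share * equivalent_annual_cost(
        computer_price, computer_life, rate)        # allocated EAC
    return staff, computer, staff + computer

# Illustrative values only (RMB); the paper reports medians near
# 1,250 RMB staff time plus 134 RMB computer costs per clinic-year.
print(surveillance_cost(fte_fraction=0.05, annual_income=25_000,
                        computer_price=3_000, computer_life=5,
                        rate=0.05, usage_share=0.10))
```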
Ebrahimpour, Fatemeh; Sadeghi, Narges; Najafi, Mostafa; Iraj, Bijan; Shahrokhi, Akram
2015-01-01
Background: Diabetic children and their families experience high levels of stress because of daily insulin injections. Objectives: This study was conducted to investigate the impact of an interactive computer game on behavioral distress due to insulin injection among diabetic children. Patients and Methods: In this clinical trial, thirty children (3-12 years) with type 1 diabetes who needed daily insulin injections were recruited and allocated randomly into two groups. Children in the intervention group received an interactive computer game and were asked to play it at home for a week. No special intervention was done for the control group. The behavioral distress of both groups was assessed before, during, and after the intervention with the Observational Scale of Behavioral Distress-Revised (OSBD-R). Results: A repeated-measures ANOVA showed no significant change in OSBD-R scores over time in the control group (P = 0.08), but the change was significant in the study group (P = 0.001). Mean distress scores differed significantly between the two groups (P = 0.03). Conclusions: According to the findings, playing an interactive computer game can decrease behavioral distress induced by insulin injection in children with type 1 diabetes. It seems this game could be beneficial when used alongside other interventions. PMID:26199708
Reinforcement learning for resource allocation in LEO satellite networks.
Usaha, Wipawee; Barria, Javier A
2007-06-01
In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
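To give the flavor of the critic-only variant, here is a toy tabular TD(0) admission controller for a single link, where the state is the number of calls in progress; this is a drastic simplification of the paper's semi-Markov LEO routing formulation, with assumed parameters throughout:

```python
import random

def td_admission(capacity=10, arrival_p=0.6, depart_p=0.1,
                 reward=1.0, alpha=0.05, gamma=0.99, steps=200_000, seed=7):
    """Critic-only TD(0) for call admission on one link: state = calls in
    progress; an arrival is admitted iff one-step lookahead on the
    learned values favors admission."""
    rng = random.Random(seed)
    V = [0.0] * (capacity + 1)       # value estimate per occupancy level
    s = 0
    for _ in range(steps):
        r = 0.0
        # departures: each ongoing call ends independently this step
        s_next = s - sum(rng.random() < depart_p for _ in range(s))
        if rng.random() < arrival_p and s_next < capacity:
            if reward + V[s_next + 1] >= V[s_next]:   # greedy w.r.t. critic
                s_next += 1
                r = reward
        V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
        s = s_next
    return V

print([round(v, 1) for v in td_admission()])
```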
ERIC Educational Resources Information Center
Krus, David J.; And Others
This paper describes a test which attempts to measure a group of personality traits by analyzing the actual behavior of the participant in a computer-simulated game. ECHO evolved from an extension and computerization of Horstein and Deutsch's allocation game. The computerized version of ECHO requires subjects to make decisions about the allocation…
A queueing model of pilot decision making in a multi-task flight management situation
NASA Technical Reports Server (NTRS)
Walden, R. S.; Rouse, W. B.
1977-01-01
Allocation of decision making responsibility between pilot and computer is considered and a flight management task, designed for the study of pilot-computer interaction, is discussed. A queueing theory model of pilot decision making in this multi-task, control and monitoring situation is presented. An experimental investigation of pilot decision making and the resulting model parameters are discussed.
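The abstract does not give the model's parameters, but the queueing-theory approach can be illustrated with a single-server Markovian queue of arriving flight-management tasks; the arrival and service rates below are assumptions:

```python
import random

def mm1_mean_wait(lam=0.5, mu=0.8, n_tasks=100_000, seed=3):
    """Simulate an M/M/1 queue (Poisson task arrivals, exponential
    service, one decision maker) via the Lindley recursion and return
    the mean wait before service. Theory: Wq = lam / (mu * (mu - lam))."""
    rng = random.Random(seed)
    t_arrive, server_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_tasks):
        t_arrive += rng.expovariate(lam)       # next task arrives
        start = max(t_arrive, server_free)     # waits if pilot is busy
        total_wait += start - t_arrive
        server_free = start + rng.expovariate(mu)
    return total_wait / n_tasks

print(mm1_mean_wait())   # ~ 0.5 / (0.8 * 0.3) ~ 2.08
```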
ERIC Educational Resources Information Center
Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes
2008-01-01
This study investigated the minimally required feedback elements of a computer-tailored dietary fat reduction intervention to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…
ERIC Educational Resources Information Center
Gambari, Amosa Isiaka; Yusuf, Mudasiru Olalere
2017-01-01
This study investigated the relative effectiveness of computer-supported cooperative learning strategies on the performance, attitudes, and retention of secondary school students in physics. A purposive sampling technique was used to select four senior secondary schools from Minna, Nigeria. The students were allocated to one of four groups:…
Dexter, Franklin; Ledolter, Johannes; Wachtel, Ruth E
2005-05-01
We considered the allocation of operating room (OR) time at facilities where the strategic decision had been made to increase the number of ORs. Allocation occurs in two stages: a long-term tactical stage followed by a short-term operational stage. Tactical decisions, approximately 1 yr in advance, determine what specialized equipment and expertise will be needed. Tactical decisions are based on estimates of future OR workload for each subspecialty or surgeon. We show that groups of surgeons can be excluded from consideration at this tactical stage (e.g., surgeons who need intensive care beds or those with below-average contribution margins per OR hour). Lower and upper limits are estimated for the future demand of OR time by the remaining surgeons. Thus, initial OR allocations can be accomplished with only partial information on future OR workload. Once the new ORs open, operational decision-making based on OR efficiency is used to fill the OR time and adjust staffing. Surgeons who were not allocated additional time at the tactical stage are provided increased OR time through operational adjustments based on their actual workload. In a case study from a tertiary hospital, future demand estimates were needed for only 15% of surgeons, illustrating the practicality of these methods for use in tactical OR allocation decisions.
The scaling issue: scientific opportunities
NASA Astrophysics Data System (ADS)
Orbach, Raymond L.
2009-07-01
A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.
Feedback and the rationing of time and effort among competing tasks.
Northcraft, Gregory B; Schmidt, Aaron M; Ashford, Susan J
2011-09-01
The study described here tested a model of how characteristics of the feedback environment influence the allocation of resources (time and effort) among competing tasks. Results demonstrated that performers invest more resources on tasks for which higher quality (more timely and more specific) feedback is available; this effect was partially mediated by task salience and task expectancies. Feedback timing and feedback specificity demonstrated both main and interaction effects on resource allocations. Results also demonstrated that performers do better on tasks for which higher quality feedback is available; this effect was mediated by resources allocated to tasks. The practical and theoretical implications of the role of the feedback environment in managing performance are discussed. PsycINFO Database Record (c) 2011 APA, all rights reserved
ERIC Educational Resources Information Center
Kaya, S.; Kablan, Z.; Akaydin, B. B.; Demir, D.
2015-01-01
The current study examined the time spent in various types of science instruction with regard to teachers' awareness of instructional activities. The perceived effectiveness of instructional activities in relation to the allocation of time was also examined. A total of 30 4th grade teachers (17 female, 13 male), from seven different primary…
Impact of Broader Sharing on Transport Time for Deceased Donor Livers
Gentry, Sommer E; Chow, Eric KH; Wickliffe, Corey E; Massie, Allan B; Leighton, Tabitha; Segev, Dorry L
2014-01-01
Recent allocation policy changes have increased sharing of deceased donor livers across local boundaries, and sharing even broader than this has been proposed as a remedy for persistent geographic disparities in liver transplantation. It is possible that broader sharing might increase cold ischemia time (CIT) and thus harm recipients. We constructed a detailed model of transport modes (driving, helicopter, or fixed-wing) and transport times between all hospitals, and investigated the relationship between transport time and CIT for deceased donor liver transplants. Median estimated transport time for regionally shared livers was 2.0 hours compared with 1.0 hours for locally allocated livers. Model-predicted transport mode was flying for 90% of regionally shared livers but only 22% of locally allocated livers. Median CIT was 7.0 hours for regionally shared livers compared with 6.0 hours for locally allocated livers. Variation in transport time accounted for only 14.7% of the variation in CIT and, on average, transport time comprised only 21% of CIT. In conclusion, non-transport factors play a substantially larger role in CIT than does transport time. Broader sharing will have only a marginal impact on CIT but will significantly increase the fraction of transplants that are transported by flying rather than driving. PMID:24975028
Steps Toward Optimal Competitive Scheduling
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen
2006-01-01
This paper is concerned with the problem of allocating a unit-capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance; however, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource: they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Network (DSN) among different users within NASA. Access to the DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies: missions spend much time and resources lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user; this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by assuming that one can assign money to customers. This assumption is reasonable: a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period. Instead, we can assume that the committee assigns a budget to each mission.
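In the simplest case, where each user requests one fixed interval and reveals a value (e.g., via the assigned budget), the unit-capacity constraint turns the optimization into weighted interval scheduling, which a standard dynamic program solves exactly. This is a simplification of DSN scheduling, where requests are typically flexible in duration and placement:

```python
from bisect import bisect_right

def best_schedule(requests):
    """Weighted interval scheduling by dynamic programming.

    requests: list of (start, end, value) for a unit-capacity resource.
    Returns (max total value, chosen requests)."""
    reqs = sorted(requests, key=lambda r: r[1])   # sort by end time
    ends = [r[1] for r in reqs]
    n = len(reqs)
    # p[i]: index of the last request ending no later than reqs[i] starts
    p = [bisect_right(ends, reqs[i][0]) - 1 for i in range(n)]
    dp = [0.0] * (n + 1)
    for i in range(n):
        dp[i + 1] = max(dp[i], reqs[i][2] + dp[p[i] + 1])
    chosen, i = [], n
    while i > 0:                                  # trace back choices
        if dp[i] == dp[i - 1]:
            i -= 1
        else:
            chosen.append(reqs[i - 1])
            i = p[i - 1] + 1
    return dp[n], chosen[::-1]

# Two compatible requests (value 11) beat the single big one (value 9).
print(best_schedule([(0, 4, 5.0), (3, 6, 4.0), (5, 9, 6.0), (2, 10, 9.0)]))
```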
A Critical Analysis of Time Allocation in Psychoeducational Evaluations
ERIC Educational Resources Information Center
Taub, Gordon E.; Valentine, Jennifer
2014-01-01
This study provides results from a national survey examining school psychologists' allocation of time in psychoeducational evaluations. A total of 177 participants with an average of 13.45 years of professional experience in school psychology, representing 39 states, participated in the survey. The results indicate that school psychologists spend the…
Prisons and Sentencing Reform.
ERIC Educational Resources Information Center
Galvin, Jim
1983-01-01
Reviews current themes in sentencing and prison policy. The eight articles of this special issue discuss selective incapacitation, prison bed allocation models, computer-scored classification systems, race and gender relations, commutation, parole, and a historical review of sentencing reform. (JAC)
LABCON - Laboratory Job Control program
NASA Technical Reports Server (NTRS)
Reams, L. T.
1969-01-01
Computer program LABCON controls the budget system in a component test laboratory whose workload is made up from many individual budget allocations. A common denominator is applied to an incoming job, to which all effort is charged and accounted for.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective execution model for them. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
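Two ingredients of such an evaluation can be sketched compactly: a scalability model for moldable tasks and a finish-time recursion over the DAG. The Amdahl-style runtime model and the assumption that allocations respect the machine size at every instant are illustrative simplifications, not the authors' integrated algorithm:

```python
def task_runtime(work, serial_frac, procs):
    """Amdahl-style moldable-task runtime model: a stand-in for the
    measured scalability characteristics the paper uses."""
    return work * (serial_frac + (1.0 - serial_frac) / procs)

def makespan(tasks, deps, alloc):
    """Finish-time recursion over a task DAG: a task starts when all of
    its predecessors finish."""
    finish = {}
    def f(name):
        if name not in finish:
            work, sf = tasks[name]
            start = max((f(d) for d in deps.get(name, [])), default=0.0)
            finish[name] = start + task_runtime(work, sf, alloc[name])
        return finish[name]
    return max(f(n) for n in tasks)

# Toy mixed-parallel schedule: solveA and solveB run concurrently.
tasks = {"decompose": (10, 0.2), "solveA": (40, 0.05),
         "solveB": (30, 0.10), "merge": (8, 0.5)}
deps = {"solveA": ["decompose"], "solveB": ["decompose"],
        "merge": ["solveA", "solveB"]}
alloc = {"decompose": 2, "solveA": 6, "solveB": 2, "merge": 4}
print(round(makespan(tasks, deps, alloc), 2))   # 27.5
```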
NASA Astrophysics Data System (ADS)
Gómez, Carlos M.; Tirado, Dolores; Rey-Maquieira, Javier
2004-10-01
We present a computable general equilibrium model (CGE) for the Balearic Islands, specifically built to analyze the welfare gains associated with an improvement in the allocation of water rights through voluntary water exchanges (mainly between the agriculture and urban sectors). For the implementation of the empirical model we built the social accounting matrix (SAM) from the last available input-output table of the islands (for the year 1997). Water exchanges provide an important alternative to make the allocation of water flexible enough to cope with the cyclical droughts that characterize the natural water regime on the islands. The main conclusion is that the increased efficiency provided by "water markets" makes this option more advantageous than the popular alternative of building new desalinization plants. Contrary to common opinion, a "water market" can also have positive and significant impacts on agricultural income.
NASA Astrophysics Data System (ADS)
Obermayer, Richard W.; Nugent, William A.
2000-11-01
The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, and stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human-computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designer's guide are presented.
Mature data transport and command management services for the Space Station
NASA Technical Reports Server (NTRS)
Carper, R. D.
1986-01-01
The duplex space/ground/space data services for the Space Station are described. The need to separate the uplink data service functions from the command functions is discussed. Command management is a process shared by an operation control center and a command management system and consists of four functions: (1) uplink data communications, (2) management of the on-board computer, (3) flight resource allocation and management, and (4) real command management. The new data service capabilities provided by microprocessors, ground and flight nodes, and closed-loop and open-loop capabilities are studied. The need for and functions of a flight resource allocation management service are examined. The system is designed so that only its users can access it; the problems encountered with open-loop uplink access are analyzed. The procedures for delivery of operational, verification, computer, and surveillance and monitoring data directly to users are reviewed.
Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment
Blocksome, Michael A [Rochester, MN; Chen, Dong [Croton On Hudson, NY; Gooding, Thomas [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Parker, Jeff [Rochester, MN
2012-02-07
Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Mobile access to virtual randomization for investigator-initiated trials.
Deserno, Thomas M; Keszei, András P
2017-08-01
Background/aims Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. Covering 88% of all randomization models that are used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
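The core idea, that an allocation is a pure function of a trial seed and the enrollment position, so no list need ever exist or be stored, can be sketched for permuted-block randomization. The paper's R/web-service implementation is replaced here by an illustrative Python function; all names and parameters are assumptions:

```python
import random

def virtual_allocation(trial_seed, subject_index, arms=("A", "B"),
                       block_size=4):
    """Compute a subject's arm on demand by deterministically
    regenerating the permuted block containing this enrollment
    position; every site derives the identical assignment."""
    assert block_size % len(arms) == 0
    block_no, pos = divmod(subject_index, block_size)
    rng = random.Random(trial_seed * 1_000_003 + block_no)  # per-block seed
    block = list(arms) * (block_size // len(arms))          # balanced block
    rng.shuffle(block)
    return block[pos]

# Every site computes the same sequence for trial 42, no stored list.
print([virtual_allocation(42, i) for i in range(8)])
```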
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object based, bottom-up attention. The model rivals the performance of state of the art non-biologically plausible feature based algorithms (and outperforms biologically plausible feature based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
High emergency organ allocation rule in lung transplantation: a simulation study.
Riou, Julien; Boëlle, Pierre-Yves; Christie, Jason D; Thabut, Gabriel
2017-10-01
The scarcity of suitable organ donors leads to protracted waiting times and mortality in patients awaiting lung transplantation. This study aims to assess the short- and long-term effects of a high emergency organ allocation policy on the outcome of lung transplantation. We developed a simulation model of lung transplantation waiting queues under two allocation strategies, based either on waiting time only or on additional criteria to prioritise the sickest patients. The model was informed by data from the United Network for Organ Sharing. We compared the impact of these strategies on waiting time, waiting list mortality and overall survival in various situations of organ scarcity. The impact of a high emergency allocation strategy depends largely on the organ supply. When organ supply is sufficient (>95 organs per 100 patients), it may prevent a small number of early deaths (1 year survival: 93.7% against 92.4% for waiting time only) without significant impact on waiting times or long-term survival. When the organ/recipient ratio is lower, the benefits in early mortality are larger but are counterbalanced by a dramatic increase of the size of the waiting list. Consequently, we observed a progressive increase of mortality on the waiting list (although still lower than with waiting time only), a deterioration of patients' condition at transplant and a decrease of post-transplant survival times. High emergency organ allocation is an effective strategy to reduce mortality on the waiting list, but causes a disruption of the list equilibrium that may have detrimental long-term effects in situations of significant organ scarcity.
High emergency organ allocation rule in lung transplantation: a simulation study
Boëlle, Pierre-Yves; Christie, Jason D.; Thabut, Gabriel
2017-01-01
The scarcity of suitable organ donors leads to protracted waiting times and mortality in patients awaiting lung transplantation. This study aims to assess the short- and long-term effects of a high emergency organ allocation policy on the outcome of lung transplantation. We developed a simulation model of lung transplantation waiting queues under two allocation strategies, based either on waiting time only or on additional criteria to prioritise the sickest patients. The model was informed by data from the United Network for Organ Sharing. We compared the impact of these strategies on waiting time, waiting list mortality and overall survival in various situations of organ scarcity. The impact of a high emergency allocation strategy depends largely on the organ supply. When organ supply is sufficient (>95 organs per 100 patients), it may prevent a small number of early deaths (1 year survival: 93.7% against 92.4% for waiting time only) without significant impact on waiting times or long-term survival. When the organ/recipient ratio is lower, the benefits in early mortality are larger but are counterbalanced by a dramatic increase of the size of the waiting list. Consequently, we observed a progressive increase of mortality on the waiting list (although still lower than with waiting time only), a deterioration of patients’ condition at transplant and a decrease of post-transplant survival times. High emergency organ allocation is an effective strategy to reduce mortality on the waiting list, but causes a disruption of the list equilibrium that may have detrimental long-term effects in situations of significant organ scarcity. PMID:29181383
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemaster, Michelle Nicole; Gay, David M.; Ehlen, Mark Andrew
2009-10-01
Staggered bioterrorist attacks with aerosolized pathogens on population centers present a formidable challenge to resource allocation and response planning. The response and planning will commence immediately after the detection of the first attack and with no or little information of the second attack. In this report, we outline a method by which resource allocation may be performed. It involves probabilistic reconstruction of the bioterrorist attack from partial observations of the outbreak, followed by an optimization-under-uncertainty approach to perform resource allocations. We consider both single-site and time-staggered multi-site attacks (i.e., a reload scenario) under conditions when resources (personnel and equipment which are difficult to gather and transport) are insufficient. Both communicable (plague) and non-communicable diseases (anthrax) are addressed, and we also consider cases when the data, the time-series of people reporting with symptoms, are confounded with a reporting delay. We demonstrate how our approach develops allocation profiles that have the potential to reduce the probability of an extremely adverse outcome in exchange for a more certain, but less adverse outcome. We explore the effect of placing limits on daily allocations. Further, since our method is data-driven, the resource allocation progressively improves as more data becomes available.
Allocation model for air tanker initial attack in firefighting
Francis E. Greulich; William G. O' Regan
1975-01-01
Timely and appropriate use of air tankers in firefighting can bring high returns, but their misuse can be expensive when measured in operating and other costs. An allocation model has been developed for identifying superior strategies for air tanker initial attack, and for choosing an optimum set of allocations among airbases. Data are presented for a representative...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
Optimal offensive missile allocations for moderate offensive and defensive forces are derived and used to study their sensitivity to force structure parameter levels. It is shown that the first strike cost is a product of the number of missiles and a function of the optimum allocation. Thus, the conditions under which the number of missiles should increase or decrease in time are also determined by this allocation.
40 CFR 97.188 - CAIR NOX allowance allocations to CAIR NOX opt-in units.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false CAIR NOX allowance allocations to CAIR NOX opt-in units. 97.188 Section 97.188 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CAIR NOX Opt-In Units § 97.188 CAIR NOX allowance allocations to CAIR NOX opt-in units. (a) Timing...
40 CFR 97.188 - CAIR NOX allowance allocations to CAIR NOX opt-in units.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false CAIR NOX allowance allocations to CAIR NOX opt-in units. 97.188 Section 97.188 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CAIR NOX Opt-In Units § 97.188 CAIR NOX allowance allocations to CAIR NOX opt-in units. (a) Timing...
40 CFR 97.188 - CAIR NOX allowance allocations to CAIR NOX opt-in units.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false CAIR NOX allowance allocations to CAIR NOX opt-in units. 97.188 Section 97.188 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CAIR NOX Opt-In Units § 97.188 CAIR NOX allowance allocations to CAIR NOX opt-in units. (a) Timing...
40 CFR 97.188 - CAIR NOX allowance allocations to CAIR NOX opt-in units.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false CAIR NOX allowance allocations to CAIR NOX opt-in units. 97.188 Section 97.188 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CAIR NOX Opt-In Units § 97.188 CAIR NOX allowance allocations to CAIR NOX opt-in units. (a) Timing...
40 CFR 97.188 - CAIR NOX allowance allocations to CAIR NOX opt-in units.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false CAIR NOX allowance allocations to CAIR NOX opt-in units. 97.188 Section 97.188 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CAIR NOX Opt-In Units § 97.188 CAIR NOX allowance allocations to CAIR NOX opt-in units. (a) Timing...
Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan
2013-01-01
Location-allocation is a combinatorial optimization problem and is NP-hard; solution approaches must therefore shift from exact to heuristic or meta-heuristic methods because of the problem's complexity. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so a proper method will reduce the time of relief operations and consequently decrease the number of fatalities. This paper presents a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of Geographic Information Systems (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm optimizes the resource allocation. The final outcome includes the spatial locations of new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation when injured people have to be taken to medical centers in a reasonable time.
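The inner (allocation) subproblem is a capacitated assignment: route the injured from demand points to centers without exceeding capacities, minimizing travel time. A greedy nearest-available sketch shows the structure the inner GA optimizes; the toy travel-time matrix stands in for GIS-derived distances:

```python
def greedy_allocate(demand, capacity, travel_time):
    """Assign injured people from demand points to medical centers.

    demand:      {point: number of injured}
    capacity:    {center: beds available}
    travel_time: {(point, center): minutes}
    Greedy: repeatedly serve the cheapest remaining (point, center)
    pair. A GA, as in the paper, can escape greedy's local optima."""
    cap, dem, flows = dict(capacity), dict(demand), {}
    for p, c in sorted(travel_time, key=travel_time.get):
        if dem.get(p, 0) > 0 and cap.get(c, 0) > 0:
            sent = min(dem[p], cap[c])
            flows[(p, c)] = sent
            dem[p] -= sent
            cap[c] -= sent
    return flows, dem   # dem holds any unserved injured

demand = {"D1": 30, "D2": 20}
capacity = {"C1": 25, "C2": 40}
travel = {("D1", "C1"): 5, ("D1", "C2"): 12,
          ("D2", "C1"): 8, ("D2", "C2"): 6}
print(greedy_allocate(demand, capacity, travel))
```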
Survival Benefit of Lung Transplantation in the Modern Era of Lung Allocation.
Vock, David M; Durheim, Michael T; Tsuang, Wayne M; Finlen Copeland, C Ashley; Tsiatis, Anastasios A; Davidian, Marie; Neely, Megan L; Lederer, David J; Palmer, Scott M
2017-02-01
Lung transplantation is an accepted and increasingly employed treatment for advanced lung diseases, but the anticipated survival benefit of lung transplantation is poorly understood. To determine whether and for which patients lung transplantation confers a survival benefit in the modern era of U.S. lung allocation. Data on 13,040 adults listed for lung transplantation between May 2005 and September 2011 were obtained from the United Network for Organ Sharing. A structural nested accelerated failure time model was used to model the survival benefit of lung transplantation over time. The effects of patient, donor, and transplant center characteristics on the relative survival benefit of transplantation were examined. Overall, 73.8% of transplant recipients were predicted to achieve a 2-year survival benefit with lung transplantation. The survival benefit of transplantation varied by native disease group (P = 0.062), with 2-year expected benefit in 39.2 and 98.9% of transplants occurring in those with obstructive lung disease and cystic fibrosis, respectively, and by lung allocation score at the time of transplantation (P < 0.001), with net 2-year benefit in only 6.8% of transplants occurring for lung allocation score less than 32.5 and in 99.9% of transplants for lung allocation score exceeding 40. A majority of adults undergoing transplantation experience a survival benefit, with the greatest potential benefit in those with higher lung allocation scores or restrictive native lung disease or cystic fibrosis. These results provide novel information to assess the expected benefit of lung transplantation at an individual level and to enhance lung allocation policy.
Remington, David L.; Leinonen, Päivi H.; Leppälä, Johanna; Savolainen, Outi
2013-01-01
Costs of reproduction due to resource allocation trade-offs have long been recognized as key forces in life history evolution, but little is known about their functional or genetic basis. Arabidopsis lyrata, a perennial relative of the annual model plant A. thaliana with a wide climatic distribution, has populations that are strongly diverged in resource allocation. In this study, we evaluated the genetic and functional basis for variation in resource allocation in a reciprocal transplant experiment, using four A. lyrata populations and F2 progeny from a cross between North Carolina (NC) and Norway parents, which had the most divergent resource allocation patterns. Local alleles at quantitative trait loci (QTL) at a North Carolina field site increased reproductive output while reducing vegetative growth. These QTL had little overlap with flowering date QTL. Structural equation models incorporating QTL genotypes and traits indicated that resource allocation differences result primarily from QTL effects on early vegetative growth patterns, with cascading effects on later vegetative and reproductive development. At a Norway field site, North Carolina alleles at some of the same QTL regions reduced survival and reproductive output components, but these effects were not associated with resource allocation trade-offs in the Norway environment. Our results indicate that resource allocation in perennial plants may involve important adaptive mechanisms largely independent of flowering time. Moreover, the contributions of resource allocation QTL to local adaptation appear to result from their effects on developmental timing and its interaction with environmental constraints, and not from simple models of reproductive costs. PMID:23979581
Hanley, Gregory P; Tiger, Jeffrey H; Ingvarsson, Einar T; Cammilleri, Anthony P
2009-01-01
The present study evaluated the effects of classwide satiation and embedded reinforcement procedures on preschoolers' activity preferences during scheduled free-play periods. The goal of the study was to increase time allocation to originally nonpreferred, but important, activities (instructional zone, library, and science) while continuing to provide access to all free-play activities. The satiation intervention applied to preferred activities resulted in increased time allocation to the instructional and science activities, the customized embedded reinforcement interventions resulted in increased time allocation to all three target activities, and high levels of attendance to the instructional and library activities were maintained during follow-up observations. Implications for the design of preschool free-play periods are discussed.
Prospect theory reflects selective allocation of attention.
Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph
2018-02-01
There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
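For reference, the standard parameterization behind these constructs (Tversky and Kahneman's 1992 cumulative prospect theory forms, with loss aversion λ, outcome sensitivity α and β, and probability sensitivity γ) is:

```latex
v(x) = \begin{cases} x^{\alpha} & x \ge 0, \\ -\lambda\,(-x)^{\beta} & x < 0, \end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}
```

Larger λ penalizes losses more heavily, smaller α and β flatten sensitivity to outcome magnitudes, and smaller γ bends the weighting function, overweighting small probabilities; these are the parameters the experiments link to attention allocation.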
Method, apparatus and system for managing queue operations of a test bench environment
Ostler, Farrell Lynn
2016-07-19
Techniques and mechanisms for performing dequeue operations for agents of a test bench environment. In an embodiment, a first group of agents are each allocated a respective ripe reservation and a second set of agents are each allocated a respective unripe reservation. Over time, queue management logic allocates respective reservations to agents and variously changes one or more such reservations from unripe to ripe. In another embodiment, an order of servicing agents allocated unripe reservations is based on relative priorities of the unripe reservations with respect to one another. An order of servicing agents allocated ripe reservations is on a first come, first served basis.
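As a rough illustration of the mechanism described, the following sketch services ripe reservations first come, first served while ordering unripe reservations by relative priority. All names and data structures are hypothetical, not the patented implementation.

```python
from collections import deque
import heapq
import itertools

class ReservationQueue:
    """Sketch of ripe/unripe reservation handling: ripe reservations are
    serviced FIFO; unripe reservations are ordered by relative priority
    (lower value = promoted sooner). Illustrative only."""

    def __init__(self):
        self.ripe = deque()           # serviced first come, first served
        self.unripe = []              # heap ordered by priority
        self._counter = itertools.count()  # tie-breaker preserving arrival order

    def reserve_unripe(self, agent, priority):
        """Allocate an unripe reservation to an agent."""
        heapq.heappush(self.unripe, (priority, next(self._counter), agent))

    def ripen(self):
        """Change the highest-priority unripe reservation to ripe."""
        if self.unripe:
            _, _, agent = heapq.heappop(self.unripe)
            self.ripe.append(agent)

    def dequeue(self):
        """Service the longest-waiting agent holding a ripe reservation."""
        return self.ripe.popleft() if self.ripe else None
```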
An Exploration of Cognitive Agility as Quantified by Attention Allocation in a Complex Environment
2017-03-01
Cognitive agility was quantified by eye-tracking data collected while subjects played a military-relevant cognitive agility computer game (Make Goal), to determine whether certain patterns of attention allocation are associated with effective performance.
78 FR 65641 - Midcontinent Independent System Operator, Inc.; Notice of Technical Conference
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... the allocation of real-time Revenue Sufficiency Guarantee (RSG) costs. In its order, the Commission... to discuss the issues raised by MISO's proposed revisions to its real-time RSG cost allocation... By order dated October 16, 2013, in Docket...
Time Allocated to Mathematics in Post-Primary Schools in Ireland: Are We in Double Trouble?
ERIC Educational Resources Information Center
O'Meara, Niamh; Prendergast, Mark
2018-01-01
Mathematics educators and legislators worldwide have begun placing a greater emphasis on teaching mathematics for understanding and through the use of real-life applications. Revised curricula have led to the time allocated to mathematics in affected countries being scrutinised. This has resulted in policy-makers and educationalists worldwide…
Mode Transitions in Glass Cockpit Aircraft: Results of a Field Study
NASA Technical Reports Server (NTRS)
Degani, Asaf; Kirlik, Alex; Shafto, Michael (Technical Monitor)
1995-01-01
One consequence of increased levels of automation in complex control systems is the presence of modes. A mode is a particular configuration of a control system that defines how human command inputs are interpreted. In complex systems, modes also often determine a specific allocation of control authority between the human and automated systems. Even in simple static devices (e.g., electronic watches, word processors), the presence of modes has been found to cause problems in either the acquisition or production of skilled performance. Many of these problems arise due to the fact that the selection of a mode causes device behavior to be mediated by hidden internal state information. For these simple systems, many of these interaction problems can be solved by the design of appropriate feedback to communicate internal state information to the human operator. In complex dynamic systems, however, the design issues associated with modes seem to transcend the problem of merely communicating internal state information via displayed feedback. In complex supervisory control systems (e.g., aircraft, spacecraft, military command and control), a key function of modes is the selection of a particular configuration of control authority between the human operator and automated control systems. One mode may result in full manual control, another may result in a mix of manual and automatic control, while a third may result in full automatic control over the entire system. The human operator selects an appropriate mode as a function of current goals, operating conditions, and operating procedures. Thus, the operator is put in a position of essentially trying to control two coupled dynamic systems: the target system itself, and also a highly complex suite of automation controlling the target system. From a historical perspective, it should probably not come as a surprise that very little information is available to guide the design of mode-oriented control systems. The topic of function allocation (i.e., the proper division of control authority among human and computer) has a long history in human-machine systems research. Although this research has produced some relevant guidelines, a design approach capable of defining appropriate allocations of control function between the human and automation is not yet available. As a result, the function allocation decision itself has been allocated to the operator, to be performed in real time, in the operation of mode-oriented control systems. A variety of documented aircraft accidents and incidents suggest that the real-time selection and monitoring of control modes is a weak link in the effective operation of complex supervisory control systems. Research in human-machine systems and human-computer interaction has barely scraped the surface of the problem of understanding how operators manage this task. The purpose of this paper is to present the results of a field study which examined how operators manage mode selection in a complex supervisory control system. Data on mode engagements using the Boeing B757/767 auto-flight system were collected during approach and descent into four major airports on the East Coast of the United States. Protocols documenting mode selection, automatic mode changes, pilot actions, quantitative records of flight-path variables, and verbal reports during and after mode engagements were collected by an observer from the jumpseat. Observations were conducted on two typical trips between three airports. Each trip was replicated 11 times, which yielded a total of 22 trips and 66 legs on which data were collected. All data collected concerned the same flight numbers and, therefore, the same time of day, same type of aircraft, and identical operational environments (e.g., ATC facilities, weather patterns, traffic flow, etc.).
Scheduler for multiprocessor system switch with selective pairing
Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina
2015-01-06
System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread, and allocates threads that indicate a need for hardware checking to the corresponding paired processor cores. The method also configures the selective pairing facility to provide multiple independent cores, and allocates threads that indicate inherent resilience to the corresponding independent cores.
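A toy sketch of the pairing idea: threads flagged as needing hardware checking run redundantly on a paired (lockstep) pair of cores, while resilient threads each get an independent core. Names, data shapes, and policy are assumptions for illustration, not the patented design.

```python
# Hypothetical selective-pairing scheduler sketch. A "paired" assignment
# means both cores run the thread and their outputs are checked against
# each other; a single core suffices for inherently resilient threads.

def schedule(threads, core_pairs):
    """threads: list of (name, needs_checking) tuples.
    core_pairs: list of (core_a, core_b) tuples that can run in lockstep
    or be split into two independent cores."""
    assignment = {}
    free_pairs = list(core_pairs)
    free_cores = []                      # singles obtained by splitting pairs
    for name, needs_checking in threads:
        if needs_checking:
            if not free_pairs:
                raise RuntimeError("no paired cores left for checked thread")
            a, b = free_pairs.pop()
            assignment[name] = (a, b)    # lockstep pair, outputs checked
        else:
            if not free_cores:
                if not free_pairs:
                    raise RuntimeError("no cores left")
                free_cores.extend(free_pairs.pop())  # split a pair
            assignment[name] = (free_cores.pop(),)   # independent core
    return assignment

print(schedule([("nav", True), ("log", False), ("ui", False)],
               [("c0", "c1"), ("c2", "c3")]))
```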
Duchateau, Emmanuel; Auty, David; Mothe, Frédéric; Longuetaud, Fleur; Ung, Chhun Huor; Achim, Alexis
2015-01-01
The branch autonomy principle, which states that the growth of individual branches can be predicted from their morphology and position in the forest canopy irrespective of the characteristics of the tree, has been used to simplify models of branch growth in trees. However, observed changes in allocation priority within trees towards branches growing in light-favoured conditions, referred to as ‘Milton’s Law of resource availability and allocation,’ have raised questions about the applicability of the branch autonomy principle. We present models linking knot ontogeny to the secondary growth of the main stem in black spruce (Picea mariana (Mill.) B.S.P.), which were used to assess the patterns of assimilate allocation over time, both within and between trees. Data describing the annual radial growth of 445 stem rings and the three-dimensional shape of 5,377 knots were extracted from optical scans and X-ray computed tomography images taken along the stems of 10 trees. Total knot to stem area increment ratios (KSR) were calculated for each year of growth, and statistical models were developed to describe the annual development of knot diameter and curvature as a function of stem radial increment, total tree height, stem diameter, and the position of knots along an annual growth unit. KSR varied as a function of tree age and of the height to diameter ratio of the stem, a variable indicative of the competitive status of the tree. Simulations of the development of an individual knot showed that an increase in the stem radial growth rate was associated with an increase in the initial growth of the knot, but also with a shorter lifespan. Our results provide support for ‘Milton’s Law,’ since they indicate that allocation priority is given to locations where the potential return is the highest. The developed models provided realistic simulations of knot morphology within trees, which could be integrated into a functional-structural model of tree growth and above-ground resource partitioning. PMID:25870769
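The KSR statistic described above reduces to a simple per-year ratio; the sketch below shows one way to compute it, with the input structure assumed for illustration.

```python
# Sketch of the knot-to-stem area increment ratio (KSR): for each growth
# year, the summed area increments of all knots divided by the stem ring
# area increment. The data layout here is an assumption, not the paper's.

def ksr_by_year(knot_increments, stem_increment):
    """knot_increments: {year: [area increment of each knot that year]}.
    stem_increment: {year: stem ring area increment}."""
    return {yr: sum(knots) / stem_increment[yr]
            for yr, knots in knot_increments.items()
            if stem_increment.get(yr)}  # skip years with no stem growth data

print(ksr_by_year({2000: [1.2, 0.8], 2001: [0.9]},
                  {2000: 40.0, 2001: 35.0}))
```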
The ontogeny of postmaturation resource allocation in turtles.
Bowden, R M; Paitz, Ryan T; Janzen, Fredric J
2011-01-01
Resource-allocation decisions vary with life-history strategy, and growing evidence suggests that long-lived endothermic vertebrates direct resources toward growth and self-maintenance when young, increasing allocation toward reproductive effort over time. Few studies have tracked the ontogeny of resource allocation (energy, steroid hormones, etc.) in long-lived ectothermic vertebrates, limiting our understanding of the generality of life-history strategies among vertebrates. We investigated how reproductively mature female painted turtles (Chrysemys picta) from two distinct age classes allocated resources over a 4-yr period and whether resource-allocation patterns varied with nesting experience. We examined age-related variation in body size, egg mass, reproductive frequency, and yolk steroids and report that younger females were smaller and allocated fewer resources to reproduction than did older females. Testosterone levels were higher in eggs from younger females, whereas eggs from second (seasonal) clutches contained higher concentrations of progesterone and estradiol. These allocation patterns resulted in older, larger females laying larger eggs and producing second clutches more frequently than their younger counterparts. We conclude that resource-allocation patterns do vary with age in a long-lived ectotherm.
Dynamic resource allocation in conservation planning
Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.
2011-01-01
Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
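The paper's algorithm carries formal near-optimality guarantees; as loose intuition only, the toy sketch below recommends, at each time step, the affordable and still-available patch with the best expected benefit per unit cost. The data, gain values, and greedy rule are invented for illustration and are not the authors' algorithm.

```python
# Toy greedy step for dynamic reserve selection: pick the available,
# not-yet-purchased patch maximizing expected benefit per unit cost.

def plan_step(available, purchased, budget, gain):
    """available: {patch: cost}; gain(patch) -> expected benefit (toy)."""
    best, best_ratio = None, 0.0
    for patch, cost in available.items():
        if patch not in purchased and cost <= budget:
            ratio = gain(patch) / cost
            if ratio > best_ratio:
                best, best_ratio = patch, ratio
    return best  # None if nothing affordable is available

gains = {"A": 6.0, "B": 5.0, "C": 2.0}          # toy benefit estimates
availability = [{"A": 4.0, "B": 2.0},           # patches on offer, step 0
                {"B": 2.0, "C": 3.0}]           # patches on offer, step 1
purchased, budget = set(), 5.0
for step, available in enumerate(availability):
    choice = plan_step(available, purchased, budget, gains.get)
    if choice:
        purchased.add(choice)
        budget -= available[choice]
    print(step, choice, round(budget, 1))
```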
Herden, Uta; Grabhorn, Enke; Briem-Richter, Andrea; Ganschow, Rainer; Nashan, Björn; Fischer, Lutz
2014-09-01
Liver allocation in the Eurotransplant (ET) region changed in 2006 from a waiting-time-based to an urgency-based system using the model for end-stage liver disease (MELD) score. To allow timely transplantation, pediatric recipients are allocated by an assigned pediatric MELD independent of severity of illness. Consequences for children listed at our center were evaluated by retrospective analysis of all primary pediatric liver transplantations (LTX) from deceased donors between 2002 and 2010 (110 LTX before/50 LTX after new allocation). Of 50 children transplanted in the MELD era, 17 (34%) underwent LTX with a high-urgent status that was real in five patients (median lab MELD 22, waiting time five d) and assigned in 12 patients (lab MELD 7, waiting time 35 d). Thirty-three children received a liver by their assigned pediatric MELD (lab MELD 15, waiting time 255 d). Waiting time in the two periods was similar, whereas wait-list mortality decreased (from about four children/yr to about one child/yr). One- and three-yr patient survival showed no significant difference (94.5/97.7%; p = 0.385), as did one- and three-yr graft survival (80.7/75.2% and 86.5/82%; p = 0.436 before/after). Introduction of a MELD-based allocation system in ET with assignment of a granted score for pediatric recipients has led to a clear prioritization of children, resulting in low wait-list mortality and good clinical outcome. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
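The key mechanism above is that ranking uses an effective (lab or assigned) MELD rather than waiting time. A minimal sketch, with field names and the assigned value chosen purely for illustration:

```python
# Illustrative match-run ordering under an urgency-based system with an
# assigned pediatric score. Field names and values are assumptions, not
# Eurotransplant's actual rules.

def match_order(candidates):
    """Rank candidates by effective MELD (lab or assigned, whichever is
    higher), descending; waiting time only breaks ties."""
    def key(c):
        effective = max(c["lab_meld"], c.get("assigned_meld") or 0)
        return (-effective, -c["days_waiting"])
    return sorted(candidates, key=key)

wait_list = [
    {"name": "adult", "lab_meld": 22, "assigned_meld": None, "days_waiting": 5},
    {"name": "child", "lab_meld": 7,  "assigned_meld": 28,   "days_waiting": 35},
]
# The child with a low lab MELD but a granted pediatric score ranks first.
print([c["name"] for c in match_order(wait_list)])
```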
Mason, Chase M; Goolsby, Eric W; Davis, Kaleigh E; Bullock, Devon V; Donovan, Lisa A
2017-05-01
Trait-based plant ecology attempts to use small numbers of functional traits to predict plant ecological strategies. However, a major gap exists between our understanding of organ-level ecophysiological traits and our understanding of whole-plant fitness and environmental adaptation. In this gap lie whole-plant organizational traits, including those that describe how plant biomass is allocated among organs and the timing of plant reproduction. This study explores the role of whole-plant organizational traits in adaptation to diverse environments in the context of life history, growth form and leaf economic strategy in a well-studied herbaceous system. A phylogenetic comparative approach was used in conjunction with common garden phenotyping to assess the evolution of biomass allocation and reproductive timing across 83 populations of 27 species of the diverse genus Helianthus (the sunflowers). Broad diversity exists among species in both relative biomass allocation and reproductive timing. Early reproduction is strongly associated with resource-acquisitive leaf economic strategy, while biomass allocation is less integrated with either reproductive timing or leaf economics. Both biomass allocation and reproductive timing are strongly related to source site environmental characteristics, including length of the growing season, temperature, precipitation and soil fertility. Herbaceous taxa can adapt to diverse environments in many ways, including modulation of phenology, plant architecture and organ-level ecophysiology. Although leaf economic strategy captures one key aspect of plant physiology, on their own leaf traits are not particularly predictive of ecological strategies in Helianthus outside of the context of growth form, life history and whole-plant organization. These results highlight the importance of including data on whole-plant organization alongside organ-level ecophysiological traits when attempting to bridge the gap between functional traits and plant fitness and environmental adaptation. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Dynamic Bandwidth Allocation with Effective Utilization of Polling Interval over WDM/TDM PON
NASA Astrophysics Data System (ADS)
Ni, Cuiping; Gan, Chaoqin; Gao, Ziyue
2014-12-01
WDM/TDM (wavelength-division multiplexing/time-division multiplexing) PON (passive optical network) appears to be an attractive solution for next-generation optical access networks. Dynamic bandwidth allocation (DBA) plays a crucial role in efficiently and fairly allocating bandwidth among all users in a WDM/TDM PON. In this paper, two dynamic bandwidth allocation schemes (DBA1 and DBA2) are proposed to eliminate the idle time of polling cycles (i.e., the polling interval), improve bandwidth utilization and make full use of bandwidth resources. The two DBA schemes adjust the time slot for sending request information and schedule users fairly to achieve effective utilization of the polling interval in WDM/TDM PON. Simulation and theoretical analyses verify that the proposed schemes outperform the conventional DBA scheme. We also compare the two schemes in terms of bandwidth utilization and average packet delay to further demonstrate the effectiveness of DBA2.
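For intuition about what a DBA grant computation does, here is a generic fair-share sketch: each ONU is granted at most its request, and unclaimed share is redistributed so the upstream channel is not left idle. This is a common textbook pattern, not the DBA1/DBA2 schemes proposed in the paper.

```python
# Generic fair-share grant sketch for a polling cycle: grant each ONU
# min(request, fair share) and redistribute leftover capacity in further
# rounds so no slots sit idle. Capacity and request figures are invented.

def allocate(requests, capacity):
    """requests: {onu: requested slots}; returns {onu: granted slots}."""
    grants = {}
    remaining = dict(requests)
    budget = capacity
    while remaining and budget > 0:
        share = budget // len(remaining)       # fair share this round
        if share == 0:
            break
        for onu in list(remaining):
            g = min(remaining[onu], share)
            grants[onu] = grants.get(onu, 0) + g
            budget -= g
            remaining[onu] -= g
            if remaining[onu] == 0:
                del remaining[onu]             # fully satisfied
    return grants

# Lightly loaded ONUs get their full requests; the heavy one absorbs
# the leftover capacity instead of leaving the cycle idle.
print(allocate({"onu1": 30, "onu2": 80, "onu3": 10}, capacity=100))
```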
Characteristics of Screen Media Use Associated With Higher BMI in Young Adolescents
Blood, Emily A.; Walls, Courtney E.; Shrier, Lydia A.; Rich, Michael
2013-01-01
OBJECTIVES: This study investigates how characteristics of young adolescents’ screen media use are associated with their BMI. By examining relationships between BMI and both time spent using each of 3 screen media and level of attention allocated to use, we sought to contribute to the understanding of mechanisms linking media use and obesity. METHODS: We measured heights and weights of 91 13- to 15-year-olds and calculated their BMIs. Over 1 week, participants completed a weekday and a Saturday 24-hour time-use diary in which they reported the amount of time they spent using TV, computers, and video games. Participants carried handheld computers and responded to 4 to 7 random signals per day by completing onscreen questionnaires reporting activities to which they were paying primary, secondary, and tertiary attention. RESULTS: Higher proportions of primary attention to TV were positively associated with higher BMI. The difference between 25th and 75th percentiles of attention to TV corresponded to an estimated +2.4 BMI points. Time spent watching television was unrelated to BMI. Neither duration of use nor extent of attention paid to video games or computers was associated with BMI. CONCLUSIONS: These findings support the notion that attention to TV is a key element of the increased obesity risk associated with TV viewing. Mechanisms may include the influence of TV commercials on preferences for energy-dense, nutritionally questionable foods and/or eating while distracted by TV. Interventions that interrupt these processes may be effective in decreasing obesity among screen media users. PMID:23569098
Impact of broader sharing on the transport time for deceased donor livers.
Gentry, Sommer E; Chow, Eric K H; Wickliffe, Corey E; Massie, Allan B; Leighton, Tabitha; Segev, Dorry L
2014-10-01
Recent allocation policy changes have increased the sharing of deceased donor livers across local boundaries, and even broader sharing has been proposed as a remedy for persistent geographic disparities in liver transplantation. It is possible that broader sharing may increase cold ischemia times (CITs) and thus harm recipients. We constructed a detailed model of transport modes (car, helicopter, and fixed-wing aircraft) and transport times between all hospitals, and we investigated the relationship between the transport time and the CIT for deceased donor liver transplants. The median estimated transport time was 2.0 hours for regionally shared livers and 1.0 hour for locally allocated livers. The model-predicted transport mode was flying for 90% of regionally shared livers but for only 22% of locally allocated livers. The median CIT was 7.0 hours for regionally shared livers and 6.0 hours for locally allocated livers. Variation in the transport time accounted for only 14.7% of the variation in the CIT, and the transport time on average made up only 21% of the CIT. In conclusion, nontransport factors play a substantially larger role in the CIT than the transport time. Broader sharing will have only a marginal impact on the CIT but will significantly increase the fraction of transplants that are transported by flying rather than driving. © 2014 American Association for the Study of Liver Diseases.
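The mode-choice logic such a model implies can be sketched as picking the fastest door-to-door option, with flying carrying fixed overheads that only pay off over longer distances. Speeds and overheads below are assumed values for illustration, not the study's calibrated parameters.

```python
# Toy mode-choice sketch: fastest of driving, helicopter, and fixed-wing
# for a given inter-hospital distance. All speeds and fixed overheads are
# illustrative assumptions.

def transport_time(distance_km):
    """Return (mode, hours) minimizing estimated door-to-door time."""
    options = {
        "car": distance_km / 90.0,                # km/h, no fixed overhead
        "helicopter": 0.5 + distance_km / 220.0,  # 30 min prep/handling
        "fixed-wing": 1.5 + distance_km / 650.0,  # airport legs + prep
    }
    mode = min(options, key=options.get)
    return mode, round(options[mode], 2)

# Short hops favor driving; regional distances favor flying, echoing the
# local-vs-regional mode split reported above.
for d in (40, 150, 600):
    print(d, transport_time(d))
```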
7 CFR 1484.52 - What are the guidelines for computing the value of non-cash contributions?
Code of Federal Regulations, 2014 CFR
2014-01-01
..., claim up to the equivalent of a step 10, GS-15 for professional personnel and up to the current... value of indirect expenditures. Allocate value on the basis of sound management and accounting...
7 CFR 1484.52 - What are the guidelines for computing the value of non-cash contributions?
Code of Federal Regulations, 2013 CFR
2013-01-01
..., claim up to the equivalent of a step 10, GS-15 for professional personnel and up to the current... value of indirect expenditures. Allocate value on the basis of sound management and accounting...