The methodology for modeling queuing systems using Petri nets
NASA Astrophysics Data System (ADS)
Kotyrba, Martin; Gaj, Jakub; Tvarůžka, Matouš
2017-07-01
This paper deals with the use of Petri nets in the modeling and simulation of queuing systems. The first part focuses on the explanation of basic concepts and properties of Petri nets and queuing systems. The practical part describes the proposed methodology for modeling queuing systems using Petri nets, which is then tested on specific cases.
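The token-firing rule that underlies such models can be sketched in a few lines. The net below is a hypothetical single-server queue (places `queue`, `idle`, `busy`, `done` are illustrative assumptions, not taken from the paper):

```python
# Minimal Petri-net sketch of a single-server queue.
# Place and transition names are illustrative, not from the paper.

def enabled(marking, pre):
    # A transition is enabled if every input place holds enough tokens.
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Firing consumes tokens from input places and adds tokens to output places.
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Net: arrivals wait in 'queue'; 'start' moves a customer into service when the
# server token is in 'idle'; 'finish' frees the server and emits a 'done' token.
start  = ({"queue": 1, "idle": 1}, {"busy": 1})
finish = ({"busy": 1}, {"idle": 1, "done": 1})

m = {"queue": 2, "idle": 1, "busy": 0, "done": 0}
m = fire(m, *start)    # server picks up the first customer
m = fire(m, *finish)   # service completes
print(m)  # {'queue': 1, 'idle': 1, 'busy': 0, 'done': 1}
```

The reachability analysis used in queuing applications of Petri nets amounts to exploring all markings obtainable by repeatedly firing enabled transitions from an initial marking like the one above.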
Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter
2010-01-01
In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques are used to evaluate systems that include queuing or waiting, for example, discrete event simulation. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers gain an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities and to incorporate waiting lines and queues into the decision-analytic modeling example.
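As a sketch of the "analytical approach with closed formulas" contrasted with simulation above, the standard M/M/1 performance measures can be computed directly. The arrival and service rates below are made-up illustrations, not figures from the tutorial:

```python
# Closed-form performance measures for an M/M/1 queue: one server, Poisson
# arrivals at rate lam, exponential service at rate mu (both per hour here).
def mm1_measures(lam, mu):
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of patients in the system
    W = 1 / (mu - lam)             # mean time in system (wait + service)
    Wq = rho / (mu - lam)          # mean waiting time in the queue
    return {"rho": rho, "L": L, "W": W, "Wq": Wq}

# Illustrative example: 4 patients/hour arriving, capacity 5 patients/hour.
m = mm1_measures(4.0, 5.0)
print(m)  # rho=0.8, L=4.0, W=1.0 h, Wq=0.8 h
```

When assumptions such as Poisson arrivals or exponential service break down, these formulas no longer apply, which is exactly when the tutorial's discrete event simulation approach becomes necessary.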
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
NASA Astrophysics Data System (ADS)
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper studies discrete event simulation (DES) as computer-based modelling that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements for the queuing system at this pharmacy. Waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios (M/M/3, M/M/4 and M/M/5) are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters, because even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
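The effect the study reports, where adding one server sharply cuts the mean wait, can be sketched analytically with the Erlang C formula for M/M/c queues. The arrival and service rates below are illustrative assumptions, not the hospital's measured data:

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    # Mean queue waiting time (hours) for an M/M/c queue via the Erlang C formula.
    a = lam / mu                  # offered load
    rho = a / c                   # per-server utilization
    assert rho < 1, "need lam < c*mu for stability"
    tail = (a**c / factorial(c)) / (1 - rho)
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)

# Illustrative rates (not the paper's data): 50 patients/h arriving,
# each pharmacist serving 30 patients/h.
lam, mu = 50.0, 30.0
w2 = erlang_c_wait(lam, mu, 2) * 60   # minutes with 2 pharmacists (M/M/2)
w3 = erlang_c_wait(lam, mu, 3) * 60   # minutes with 3 pharmacists (M/M/3)
print(round(w2, 2), round(w3, 2))     # adding a server sharply cuts the wait
```

The nonlinearity is the point: when per-server utilization is high, one extra server can cut queue waits by far more than its proportional share of capacity, consistent with the near-50% reduction the simulation found.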
Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.
Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao
2016-11-22
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day's WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
Discrete Event Simulation Models for CT Examination Queuing in West China Hospital.
Luo, Li; Liu, Hangjiang; Liao, Huchang; Tang, Shijun; Shi, Yingkang; Guo, Huili
2016-01-01
In CT examination, emergency patients (EPs) have the highest priorities in the queuing system, and thus general patients (GPs) have to wait for a long time, leading to low overall patient satisfaction. The aim of this study is to improve patient satisfaction by designing new queuing strategies for CT examination. We divide the EPs into urgent and emergency types and then design two queuing strategies: in one, the urgent patients (UPs) wedge into the GPs' queue at fixed intervals (fixed priority model); in the other, patients have dynamic priorities for queuing (dynamic priority model). Based on data from the Radiology Information Database (RID) of West China Hospital (WCH), we develop discrete event simulation models for CT examination according to the designed strategies and compare the performance of the different strategies on the basis of the simulation results. The dynamic priority strategy decreases the waiting time of GPs by 13 minutes and increases the degree of satisfaction by 40.6%. We thus design a more reasonable CT examination queuing strategy that decreases patients' waiting time and increases their satisfaction.
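A dynamic-priority discipline of the kind compared here can be sketched as priority aging: a patient's effective priority improves the longer they wait, so general patients are not starved by a stream of urgent arrivals. The scoring rule and the numbers below are illustrative assumptions, not the WCH model:

```python
# Sketch of a dynamic-priority selection rule with priority aging.
# Lower effective score is served first; waiting time earns a credit.

def next_patient(waiting, now, aging_rate=0.1):
    # waiting: list of (base_priority, arrival_time, name).
    # Effective score = base priority minus credit for time already waited.
    return min(waiting, key=lambda p: p[0] - aging_rate * (now - p[1]))

# An urgent patient who just arrived vs. a general patient waiting 90 time units.
waiting = [(0, 95, "urgent-A"), (2, 10, "general-B")]
chosen = next_patient(waiting, now=100)
print(chosen[2])  # general-B outranks the fresh urgent arrival after aging
```

With `aging_rate=0` this degenerates to a strict fixed-priority rule in which urgent patients always pre-empt, which is the behavior the dynamic model is designed to soften.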
Optimal service using Matlab - simulink controlled Queuing system at call centers
NASA Astrophysics Data System (ADS)
Balaji, N.; Siva, E. P.; Chandrasekaran, A. D.; Tamilazhagan, V.
2018-04-01
This paper presents graphical integrated-model-based academic research on telephone call centres and introduces an important feature of real queues: impatient customers and abandonments. The modern call centre is a complex socio-technical system, and queuing theory has become a suitable tool in the telecom industry for providing better online services. The Matlab-Simulink multi-queue structured models provide better solutions for complex situations at call centres, and service performance measures are analysed at the optimal level through the Simulink queuing model.
Human Factors of Queuing: A Library Circulation Model.
ERIC Educational Resources Information Center
Mansfield, Jerry W.
1981-01-01
Classical queuing theories and their accompanying service facilities totally disregard the human factors in the name of efficiency. As library managers we need to be more responsive to human needs in the design of service points and make every effort to minimize queuing and queue frustration. Five references are listed. (Author/RAA)
Improving queuing service at McDonald's
NASA Astrophysics Data System (ADS)
Koh, Hock Lye; Teh, Su Yean; Wong, Chin Keat; Lim, Hooi Kie; Migin, Melissa W.
2014-07-01
Fast food restaurants are popular among price-sensitive youths and working adults who value the conducive environment and convenient services. McDonald's chains of restaurants promote their sales during lunch hours by offering package meals which are perceived to be inexpensive. These promotional lunch meals attract good response, resulting in occasional long queues and inconvenient waiting times. A study is conducted to monitor the distribution of waiting time, queue length, customer arrival and departure patterns at a McDonald's restaurant located in Kuala Lumpur. A customer survey is conducted to gauge customers' satisfaction regarding waiting time and queue length. An android app named Que is developed to perform onsite queuing analysis and report key performance indices. The queuing theory in Que is based upon the concept of Poisson distribution. In this paper, Que is utilized to perform queuing analysis at this McDonald's restaurant with the aim of improving customer service, with particular reference to reducing queuing time and shortening queue length. Some results will be presented.
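The Poisson arrival assumption that underpins Que's queuing analysis can be sketched directly. The lunch-hour arrival rate below is an illustrative figure, not the restaurant's measured data:

```python
from math import exp, factorial

# Probability of exactly k Poisson arrivals in t minutes at rate lam per minute.
def poisson_pmf(k, lam, t):
    m = lam * t                    # expected number of arrivals in the window
    return m**k * exp(-m) / factorial(k)

# Illustrative lunch-hour rate: 2 customers per minute.
p_none = poisson_pmf(0, 2.0, 1.0)                              # an idle minute
p_surge = 1 - sum(poisson_pmf(k, 2.0, 1.0) for k in range(4))  # 4+ arrivals in a minute
print(round(p_none, 3), round(p_surge, 3))
```

Estimating `lam` from observed inter-arrival times is exactly the kind of onsite measurement the app performs; the queue-length and waiting-time indices then follow from standard Poisson-queue formulas.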
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? 661.43 Section 661.43 Highways FEDERAL HIGHWAY... PROGRAM § 661.43 Can other sources of funds be used to finance a queued project in advance of receipt of...
Study on combat effectiveness of air defense missile weapon system based on queuing theory
NASA Astrophysics Data System (ADS)
Zhao, Z. Q.; Hao, J. X.; Li, L. J.
2017-01-01
Queuing theory is a method to analyze the combat effectiveness of air defense missile weapon systems. A model of service probability based on queuing theory was constructed and applied to analyzing the combat effectiveness of the "Sidewinder" and "Tor-M1" air defense missile weapon systems. Finally, for different target densities, the combat effectiveness of different combat units of the two types of air defense missile weapon systems is calculated. This method can be used to analyze the usefulness of air defense missile weapon systems.
Airport Facility Queuing Model Validation
DOT National Transportation Integrated Search
1977-05-01
Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...
NASA Astrophysics Data System (ADS)
Sun, Y.; Li, Y. P.; Huang, G. H.
2012-06-01
In this study, a queuing-theory-based interval-fuzzy robust two-stage programming (QB-IRTP) model is developed through introducing queuing theory into an interval-fuzzy robust two-stage (IRTP) optimization framework. The developed QB-IRTP model can not only address highly uncertain information for the lower and upper bounds of interval parameters but also be used for analysing a variety of policy scenarios that are associated with different levels of economic penalties when the promised targets are violated. Moreover, it can reflect uncertainties in queuing theory problems. The developed method has been applied to a case of long-term municipal solid waste (MSW) management planning. Interval solutions associated with different waste-generation rates, different waiting costs and different arriving rates have been obtained. They can be used for generating decision alternatives and thus help managers to identify desired MSW management policies under various economic objectives and system reliability constraints.
Principles of Queued Service Observing at CFHT
NASA Astrophysics Data System (ADS)
Manset, Nadine; Burdullis, T.; Devost, D.
2011-03-01
CFHT started to use Queued Service Observing in 2001, and is now operating in that mode over 95% of the time. Ten years later, the observations are now carried out by Remote Observers who are not present at the telescope (see the companion presentation "Remote Queued Service Observing at CFHT"). The next phase at CFHT will likely involve assisted or autonomous service observing (see the presentation "Artificial Intelligence in Autonomous Telescopes"), which would not be possible without first having a Queued observations system already in place. The advantages and disadvantages of QSO at CFHT will be reviewed. The principles of QSO at CFHT, which allow CFHT to complete 90-100% of the top 30-40% programs and often up to 80% of other accepted programs, will be presented, along with the strategic use of overfill programs, the method of agency balance, and the suite of planning, scheduling, analysis and data quality assessment tools available to Queue Coordinators and Remote Observers.
Queuing Theory and Reference Transactions.
ERIC Educational Resources Information Center
Terbille, Charles
1995-01-01
Examines the implications of applying the queuing theory to three different reference situations: (1) random patron arrivals; (2) random durations of transactions; and (3) use of two librarians. Tables and figures represent results from spreadsheet calculations of queues for each reference situation. (JMV)
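The two-librarian situation examined above touches a classic queuing result: a single shared queue feeding two servers (M/M/2) outperforms two separate queues (two M/M/1 systems with arrivals split evenly). A sketch, with illustrative rates:

```python
# Compare mean queue waits: two separate M/M/1 desks vs. one pooled M/M/2 queue.
def mm1_wq(lam, mu):
    return lam / (mu * (mu - lam))          # mean queue wait, M/M/1

def mm2_wq(lam, mu):
    rho = lam / (2 * mu)
    p_wait = 2 * rho**2 / (1 + rho)         # Erlang C probability of waiting, c=2
    return p_wait / (2 * mu - lam)

lam, mu = 1.5, 1.0    # patrons/min total, transactions/min per librarian
separate = mm1_wq(lam / 2, mu)   # each librarian gets half the arrivals
pooled = mm2_wq(lam, mu)         # one queue, either librarian serves
print(round(separate, 3), round(pooled, 3))  # pooling shortens the wait
```

Pooling wins because a shared queue never leaves one librarian idle while patrons wait at the other desk, which speaks directly to the service-point design question the article raises.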
A queuing model for road traffic simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
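The state-dependent service idea can be sketched with a Greenshields-style linear speed-density law: the more vehicles occupy a road section, the slower they travel, so the section's service rate depends on its occupancy. The specific law and constants here are illustrative assumptions, not the article's fundamental diagram:

```python
# State-dependent service rate for a road section in an M/G/c/c-style model:
# with n vehicles present, speed falls linearly with density (Greenshields).
def service_rate(n, capacity=100, v_free=90.0, length_km=1.0):
    v = v_free * (1 - n / capacity)   # speed (km/h) at occupancy n
    return n * v / length_km          # flow out of the section (vehicles/h)

# Flow rises with density, peaks, then collapses toward jam density.
print(service_rate(10), service_rate(50), service_rate(90))
```

The hump-shaped output (low flow at both light and near-jam occupancy, maximal in between) is the density-flow fundamental diagram that the article's variant of the model works with directly.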
Capacity utilization study for aviation security cargo inspection queuing system
NASA Astrophysics Data System (ADS)
Allgood, Glenn O.; Olama, Mohammed M.; Lake, Joe E.; Brumback, Daryl
2010-04-01
In this paper, we conduct performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
Belciug, Smaranda; Gorunescu, Florin
2015-02-01
Scarce healthcare resources require carefully made policies ensuring optimal bed allocation, quality healthcare service, and adequate financial support. This paper proposes a complex analysis of the resource allocation in a hospital department by integrating in the same framework a queuing system, a compartmental model, and an evolutionary-based optimization. The queuing system shapes the flow of patients through the hospital, the compartmental model offers a feasible structure of the hospital department in accordance to the queuing characteristics, and the evolutionary paradigm provides the means to optimize the bed-occupancy management and the resource utilization using a genetic algorithm approach. The paper also focuses on a "What-if analysis" providing a flexible tool to explore the effects on the outcomes of the queuing system and resource utilization through systematic changes in the input parameters. The methodology was illustrated using a simulation based on real data collected from a geriatric department of a hospital from London, UK. In addition, the paper explores the possibility of adapting the methodology to different medical departments (surgery, stroke, and mental illness). Moreover, the paper also focuses on the practical use of the model from the healthcare point of view, by presenting a simulated application. Copyright © 2014 Elsevier Inc. All rights reserved.
Assessing the Queuing Process Using Data Envelopment Analysis: an Application in Health Centres.
Safdar, Komal A; Emrouznejad, Ali; Dey, Prasanta K
2016-01-01
Queuing is one of the very important criteria for assessing the performance and efficiency of any service industry, including healthcare. Data Envelopment Analysis (DEA) is one of the most widely-used techniques for performance measurement in healthcare. However, no queue management application has been reported in the health-related DEA literature. Most studies of patient flow systems have had the objective of improving an already existing appointment system. The current study presents a novel application of DEA for assessing the queuing process at an outpatients' department of a large public hospital in a developing country where appointment systems do not exist. The main aim of the current study is to demonstrate the usefulness of DEA modelling in the evaluation of a queue system. The patient flow pathway considered for this study consists of two stages: consultation with a doctor, and pharmacy. The DEA results indicated that the waiting times and other related queuing variables included in the model need considerable minimisation at both stages.
NASA Astrophysics Data System (ADS)
Chowdhury, Prasun; Saha Misra, Iti
2014-10-01
Nowadays, due to the increased demand on Broadband Wireless Access (BWA) networks, a promised Quality of Service (QoS) is required to manage the seamless transmission of heterogeneous handoff calls. To this end, this paper proposes an improved Call Admission Control (CAC) mechanism with a prioritized handoff queuing scheme that aims to reduce the dropping probability of handoff calls. Handoff calls are queued when no bandwidth is available even after the allowable bandwidth degradation of the ongoing calls, and are admitted into the network when an ongoing call terminates, with a higher priority than newly originated calls. An analytical Markov model for the proposed CAC mechanism is developed to analyze various performance parameters. Analytical results show that the proposed CAC with handoff queuing prioritizes handoff calls effectively and reduces the dropping probability of the system by 78.57% for real-time traffic without degrading the number of failed new call attempts. This results in increased bandwidth utilization of the network.
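The kind of birth-death Markov analysis used for such admission-control models can be sketched with a finite-capacity M/M/c/K cell: calls arriving when all K slots are occupied are dropped. The parameters are illustrative, and this is a simplified stand-in, not the paper's exact model:

```python
# Dropping probability in an M/M/c/K birth-death chain: c channels, room for
# K calls in total (in service + queued); arrivals finding K calls are dropped.
def mmck_drop_prob(lam, mu, c, K):
    p = [1.0]                          # unnormalized stationary probabilities p_n
    for n in range(1, K + 1):
        rate = mu * min(n, c)          # total service rate with n calls present
        p.append(p[-1] * lam / rate)
    z = sum(p)
    return p[K] / z                    # PASTA: an arrival finds the system full

# Illustrative cell: 4 channels, queue room for 4 more calls.
print(round(mmck_drop_prob(lam=4.0, mu=1.0, c=4, K=8), 4))
```

Queuing handoff calls corresponds to raising K above c: the stationary analysis then shows directly how much the dropping probability falls, which is the effect the paper quantifies.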
An application of queuing theory to waterfowl migration
Sojda, Richard S.; Cornely, John E.; Fredrickson, Leigh H.; Rizzoli, A.E.; Jakeman, A.J.
2002-01-01
There has always been great interest in the migration of waterfowl and other birds. We have applied queuing theory to modelling waterfowl migration, beginning with a prototype system for the Rocky Mountain Population of trumpeter swans (Cygnus buccinator) in Western North America. The queuing model can be classified as a D/BB/28 system, and we describe the input sources, service mechanism, and network configuration of queues and servers. The intrinsic nature of queuing theory is to represent the spatial and temporal characteristics of entities and how they move, are placed in queues, and are serviced. The service mechanism in our system is an algorithm representing how swans move through the flyway based on seasonal life cycle events. The system uses an observed number of swans at each of 27 areas for a breeding season as input and simulates their distribution through four seasonal steps. The result is a simulated distribution of birds for the subsequent year's breeding season. The model was built as a multiagent system with one agent handling movement algorithms, with one facilitating user interface, and with one to seven agents representing specific geographic areas for which swan management interventions can be implemented. The many parallels in queuing model servers and service mechanisms with waterfowl management areas and annual life cycle events made the transfer of the theory to practical application straightforward.
Application of queuing model in Dubai's busiest megaplex
NASA Astrophysics Data System (ADS)
Bhagchandani, Maneesha; Bajpai, Priti
2013-09-01
This paper provides a study and analysis of the extremely busy booking counters at a megaplex in Dubai using a queuing model and simulation. Dubai is an emirate in the UAE with a multicultural, majority foreign-born population, and cinema is one of its major forms of entertainment. There are more than 13 megaplexes, each with between 3 and 22 screens, showing movies in English, Arabic, Hindi and other languages. It has been observed that during weekends megaplexes attract large crowds, resulting in long queues at the booking counters. One of the busiest megaplexes was selected for the study, and queuing theory satisfies the model when tested in a real-time situation. The concepts of arrival rate, service rate, utilization rate, waiting time in the system and average number of people in the queue, using Little's theorem and the M/M/s queuing model along with simulation software, have been used to suggest an empirical solution. The aim of the paper is twofold: to assess the present situation at the megaplex, and to give recommendations to optimize the use of booking counters.
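Little's theorem, which the study applies, relates any two of {arrival rate, time in system, number in system} to the third; the figures below are illustrative, not the paper's measurements:

```python
# Little's theorem: L = lam * W, independent of the arrival or service
# distributions, for any stable queuing system.
lam = 3.0            # customers arriving per minute at the booking counters
W = 4.0              # average minutes a customer spends queuing plus buying
L = lam * W          # average number of customers present at the counters
print(L)  # 12.0
```

Its distribution-free character is what makes it useful in field studies like this one: counting heads in the queue and timing arrivals is enough to infer the average wait without knowing the service-time distribution.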
Redesign of a university hospital preanesthesia evaluation clinic using a queuing theory approach.
Zonderland, Maartje E; Boer, Fredrik; Boucherie, Richard J; de Roode, Annemiek; van Kleef, Jack W
2009-11-01
Changes in patient length of stay (the duration of 1 clinic visit) as a result of the introduction of an electronic patient file system forced an anesthesia department to change its outpatient clinic organization. In this study, we sought to demonstrate how the involvement of essential employees combined with mathematical techniques to support the decision-making process resulted in a successful intervention. The setting is the preanesthesia evaluation clinic (PAC) of a university hospital, where patients consult several medical professionals, either by walk-in or appointment. Queuing theory was used to model the initial set-up of the clinic, and later to model possible alternative designs. With the queuing model, possible improvements in efficiency could be investigated. Inputs to the model were patient arrival rates and expected service times with clinic employees, collected from the clinic's logging system and by observation. The performance measures calculated with the model were patient length of stay and employee utilization rate. Supported by the model outcomes, a working group consisting of representatives of all clinic employees decided whether the initial design should be maintained or an intervention was needed. The queuing model predicted that 3 of the proposed alternatives would result in better performance. Key points in the intervention were the rescheduling of appointments and the reallocation of tasks. The intervention resulted in a shortening of the time the anesthesiologist needed to decide upon approving the patient for surgery. Patient arrivals increased sharply over 1 yr by more than 16%; however, patient length of stay at the clinic remained essentially unchanged. If the initial set-up of the clinic had been maintained, the patient length of stay would have increased dramatically. Queuing theory provides robust methods to evaluate alternative designs for the organization of PACs. In this article, we show that queuing modeling is an adequate approach for redesigning processes in PACs.
Optimization of airport security process
NASA Astrophysics Data System (ADS)
Wei, Jianan
2017-05-01
In order to facilitate passenger travel while ensuring public safety, the airport security process and its scheduling are optimized. A stochastic Petri net is used to simulate the single-channel security process; the reachability graph is drawn and the homogeneous Markov chain is constructed to analyze the performance of the security process network and to find the bottleneck that limits passenger throughput. The initial state has one security channel open. When passengers arrive at a rate that exceeds the processing capacity of the open channels, they are queued, and the moment the queuing time reaches the passengers' acceptable threshold is taken as the time to open or close the next channel. Simulating this dynamic scheduling of the number of security channels reduces passenger queuing time.
Metastability of Queuing Networks with Mobile Servers
NASA Astrophysics Data System (ADS)
Baccelli, F.; Rybko, A.; Shlosman, S.; Vladimirov, A.
2018-04-01
We study symmetric queuing networks with moving servers and FIFO service discipline. The mean-field limit dynamics demonstrates unexpected behavior which we attribute to the metastability phenomenon. Large enough finite symmetric networks on regular graphs are proved to be transient for arbitrarily small inflow rates. However, the limiting non-linear Markov process possesses at least two stationary solutions. The proof of transience is based on martingale techniques.
Belciug, Smaranda; Gorunescu, Florin
2016-03-01
This paper explores how efficient intelligent decision support systems, both easily understandable and straightforwardly implemented, can help modern hospital managers to optimize both bed occupancy and utilization costs. It proposes a hybrid genetic algorithm-queuing multi-compartment model for patient flow in hospitals. A finite-capacity queuing model with phase-type service distribution is combined with a compartmental model, and an associated cost model is set up. An evolutionary approach is used to enhance the ability to optimize both bed management and associated costs. In addition, a "What-if analysis" shows how changing the model parameters could improve performance while controlling costs. The study uses bed-occupancy data collected at the Department of Geriatric Medicine, St. George's Hospital, London, for the period 1969-1984 and January 2000. The hybrid model revealed that a bed occupancy exceeding 91%, implying a patient rejection rate around 1.1%, can be achieved with 159 beds plus 8 unstaffed beds. The same holding and penalty costs, but significantly different bed allocations (156 vs. 184 staffed beds, and 8 vs. 9 unstaffed beds, respectively), result in significantly different costs (£755 vs. £1172). Moreover, once the arrival rate exceeds 7 patients/day, the costs associated with the finite-capacity system become significantly smaller than those associated with an Erlang B queuing model (£134 vs. £947). By encoding the whole information provided by both the queuing system and the cost model through chromosomes, the genetic algorithm represents an efficient tool for optimizing bed allocation and associated costs. The methodology can be extended to different medical departments with minor modifications in structure and parameterization. Copyright © 2016 Elsevier B.V. All rights reserved.
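The Erlang B model used as the benchmark above has a standard recursive form for its blocking (rejection) probability. A minimal sketch with generic parameters, not the study's data:

```python
def erlang_b(c, a):
    """Erlang B blocking probability for c servers (beds) and offered load
    a = arrival rate x mean service time, via the standard recursion
    B(k) = a*B(k-1) / (k + a*B(k-1)), starting from B(0) = 1."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# Generic example: an offered load of 2 on 2 servers blocks 40% of arrivals.
print(round(erlang_b(2, 2.0), 6))   # prints 0.4
```

The recursion is numerically stable even for hundreds of servers, unlike the naive factorial formula.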
Modeling and performance analysis of QoS data
NASA Astrophysics Data System (ADS)
Strzeciwilk, Dariusz; Zuberek, Włodzimierz M.
2016-09-01
The article presents the results of modeling and analyzing data-transmission performance in systems that support quality of service. Models are designed and tested taking into account a multiservice network architecture, i.e. one supporting the transmission of data belonging to different traffic classes. Traffic-shaping mechanisms based on Priority Queuing were studied, both with an integrated data source and with various generated data sources. The basic problems of QoS-supporting architectures and queuing systems are discussed. Models based on Petri nets, supported by temporal logics, were designed and built. Simulation tools were used to verify the traffic-shaping mechanisms with the applied queuing algorithms. It is shown that temporal Petri net models can be used effectively in modeling and analyzing the performance of computer networks.
Bandwidth Allocation to Interactive Users in DBS-Based Hybrid Internet
1998-01-01
Framework for queuing analysis: an ON/OFF source traffic model; service quality. The approach aims at minimizing the queuing delay and, in consequence, at improving the service quality as perceived by the users. The merit of this approach, first introduced in [8], is the ability to capture the characteristics of the …
Code of Federal Regulations, 2010 CFR
2010-04-01
23 CFR § 661.43 — Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? Highways; Federal Highway Administration, Department of Transportation; Engineering and Traffic Operations; Indian Reservation Road Bridge Program.
MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A
Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study of an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, and throughput. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model, utilizing these vetted performance measures, will reduce the overall cost and shipping delays associated with the new inspection requirements.
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
Some queuing network models of computer systems
NASA Technical Reports Server (NTRS)
Herndon, E. S.
1980-01-01
Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
Delay-aware adaptive sleep mechanism for green wireless-optical broadband access networks
NASA Astrophysics Data System (ADS)
Wang, Ruyan; Liang, Alei; Wu, Dapeng; Wu, Dalei
2017-07-01
Wireless-Optical Broadband Access Networks (WOBANs) are high-capacity, reliable, flexible, and ubiquitous, as they take full advantage of the merits of both optical and wireless communication technologies. As in other access networks, high energy consumption poses a great challenge for building WOBANs. To address this problem, lightly loaded Optical Network Units (ONUs) can be put to sleep to reduce energy consumption. Such operation, however, increases packet delay. Jointly considering energy consumption and transmission delay, we propose a delay-aware adaptive sleep mechanism. Specifically, we develop a new analytical method to evaluate the transmission and queuing delays over the optical part, instead of adopting an M/M/1 queuing model. Meanwhile, we also analyze the access and queuing delays of the wireless part. Based on these delay models, we mathematically derive the ONU's optimal sleep time. In addition, we provide numerous simulation results to show the effectiveness of the proposed mechanism.
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
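The EQP dynamics can be sketched with a toy simulation. The update rule below is a simplified front-to-back sequential sweep, not an exact reproduction of the parallel, backward-sequential, or continuous-time rules analyzed in the paper, and the parameter values are illustrative.

```python
import random

def eqp_length(alpha, beta, p, steps, seed=0):
    """Toy exclusive-queuing-process sketch: sites[0] is the service (front)
    site; customers enter at the back with prob. alpha, hop forward with
    prob. p if the site ahead is empty, and leave from the front with
    prob. beta. Returns the system length (position of the last customer)."""
    rng = random.Random(seed)
    sites = []                                    # True = occupied site
    for _ in range(steps):
        if sites and sites[0] and rng.random() < beta:
            sites[0] = False                      # service completion at the front
        for i in range(1, len(sites)):            # front-to-back sweep
            if sites[i] and not sites[i - 1] and rng.random() < p:
                sites[i - 1], sites[i] = True, False
        while sites and not sites[-1]:
            sites.pop()                           # system ends at the last customer
        if rng.random() < alpha:
            sites.append(True)                    # arrival behind the last customer
    return len(sites)
```

Running this with α well below β keeps the length bounded (the convergent phase), while α above β drives unbounded growth (the divergent phase), mirroring the phase diagram described in the abstract.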
Khalid, Ruzelan; Nawawi, Mohd Kamal M; Kawsar, Luthful A; Ghani, Noraida A; Kamil, Anton A; Mustafa, Adli
2013-01-01
M/G/C/C state-dependent queuing networks model service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to overcome this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, blocking probability, expected service time, and expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates in which the simulation results fluctuate drastically across replications, causing discrepancies between the simulation and analytical results. Detailed results showing how the simulation results tally with the analytical results, in both tabular and graphical forms, together with scientific justifications, are documented and discussed.
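The analytical side of such models has a closed product form. A minimal sketch of the state probabilities, following the standard M/G/C/C state-dependent formulation with f(n) the normalized service rate when n entities are present (it collapses to Erlang B when f ≡ 1); parameter values are generic:

```python
from math import factorial, prod

def mgcc_probs(lam, mean_service, C, f):
    """Steady-state probabilities of an M/G/C/C state-dependent queue.
    f(i): normalized service rate with i entities present (f(1) = 1).
    Returns [P(0), ..., P(C)]; P(C) is the blocking probability."""
    a = lam * mean_service
    weights = [a ** n / (factorial(n) * prod(f(i) for i in range(1, n + 1)))
               for n in range(C + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Sanity check: with f(i) = 1 the model reduces to Erlang B, so an offered
# load of 2 on a capacity-2 system gives a blocking probability of 0.4.
print(round(mgcc_probs(2.0, 1.0, 2, lambda i: 1.0)[-1], 6))   # prints 0.4
```

In pedestrian-flow applications, f is typically a decreasing function of occupancy (walking slows as a corridor fills), which is exactly the state dependence the abstract says stock DES tools do not provide.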
OSLG: A new granting scheme in WDM Ethernet passive optical networks
NASA Astrophysics Data System (ADS)
Razmkhah, Ali; Rahbar, Akbar Ghaffarpour
2011-12-01
Several granting schemes have been proposed for granting transmission windows and dynamic bandwidth allocation (DBA) in passive optical networks (PONs). Generally, granting schemes suffer from bandwidth wastage of granted windows. Here, we propose a new granting scheme for WDM Ethernet PONs, called optical network unit (ONU) Side Limited Granting (OSLG), that conserves upstream bandwidth, thus decreasing queuing delay and packet drop ratio. In OSLG, each ONU, instead of the optical line terminal (OLT), determines its own transmission window. Two OSLG algorithms are proposed in this paper: the OSLG_GA algorithm, which determines the size of the transmission window in such a way that the bandwidth wastage problem is relieved, and the OSLG_SC algorithm, which saves unused bandwidth for better bandwidth utilization later on. OSLG can be used as the granting scheme of any DBA to provide better performance in terms of packet drop ratio and queuing delay. Our performance evaluations show the effectiveness of OSLG in reducing packet drop ratio and queuing delay under different DBA techniques.
NASA Astrophysics Data System (ADS)
Kikuchi, Takahiro; Kubo, Ryogo
2016-08-01
In energy-efficient passive optical network (PON) systems, the increase in the queuing delays caused by the power-saving mechanism of optical network units (ONUs) is an important issue. Some researchers have proposed quality-of-service (QoS)-aware ONU cyclic sleep controllers in PON systems. We have proposed proportional (P) and proportional-derivative (PD)-based controllers to maintain the average queuing delay at a constant level regardless of the amount of downstream traffic. However, sufficient performance has not been obtained because of the sleep period limitation. In this paper, proportional-integral (PI) and proportional-integral-derivative (PID)-based controllers considering the sleep period limitation, i.e., using an anti-windup (AW) technique, are proposed to improve both the QoS and power-saving performance. Simulations confirm that the proposed controllers provide better performance than conventional controllers in terms of the average downstream queuing delay and the time occupancy of ONU active periods.
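The anti-windup idea referenced above is generic control engineering: when the controller output saturates (here, the sleep period hitting its limit), the integral term is frozen so it does not wind up. A minimal conditional-integration sketch, not the authors' controller; all parameter values are illustrative:

```python
def pid_step(error, state, kp, ki, kd, dt, out_min, out_max):
    """One PID update with conditional-integration anti-windup: the integral
    accumulates only while the (unclamped) output is inside its limits.
    state = (integral, previous_error); returns (clamped_output, new_state)."""
    integral, prev_error = state
    derivative = (error - prev_error) / dt
    out = kp * error + ki * integral + kd * derivative
    if out_min < out < out_max:
        integral += error * dt            # safe to integrate: not saturated
    clamped = min(max(out, out_min), out_max)
    return clamped, (integral, error)

# Inside the limits the controller behaves normally; a large error saturates
# the output at out_max while the integral stays frozen.
out, st = pid_step(0.5, (0.0, 0.0), kp=1.0, ki=0.5, kd=0.0, dt=1.0,
                   out_min=-1.0, out_max=1.0)
print(out)                                # prints 0.5
```

In the PON setting the "output" would be the commanded sleep period and the "error" the deviation of the measured queuing delay from its target; those mappings are assumptions for illustration.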
Using queuing theory and simulation model to optimize hospital pharmacy performance.
Bahadori, Mohammadkarim; Mohammadnejhad, Seyed Mohsen; Ravangard, Ramin; Teymourzadeh, Ehsan
2014-03-01
Hospital pharmacy is responsible for controlling and monitoring the medication use process and ensuring timely access to safe, effective and economical use of drugs and medicines for patients and hospital staff. This study aimed to optimize the management of the studied outpatient pharmacy by developing a suitable queuing theory and simulation technique. A descriptive-analytical study was conducted in a military hospital in Tehran, Iran, in 2013. A sample of 220 patients referred to the outpatient pharmacy of the hospital in two shifts, morning and evening, was selected to collect the data needed to determine the arrival rate, service rate, and other quantities required to calculate the patient flow and queuing network performance variables. After the initial analysis of the collected data using SPSS 18, the pharmacy queuing network performance indicators were calculated for both shifts. Then, based on the collected data and to provide appropriate solutions, the current queuing system for both shifts was modeled and simulated using ARENA 12, and 4 scenarios were explored. Results showed that the queue characteristics of the studied pharmacy were very undesirable in both the morning and evening shifts. The average numbers of patients in the pharmacy were 19.21 and 14.66 in the morning and evening, respectively. The average times spent in the system by clients were 39 minutes in the morning and 35 minutes in the evening. The system utilization in the morning and evening was 25% and 21%, respectively. The simulation results showed that reducing the staff in the morning from 2 to 1 at the prescription-receiving stage did not change the queue performance indicators. Adding one staff member to the prescription-filling stage reduced the average queue length by 10 persons and the average waiting time by 18 minutes and 14 seconds.
On the other hand, simulation results showed that in the evening, decreasing the staff from 2 to 1 at the prescription-delivery stage changed the queue performance indicators very little. Adding a staff member to the prescription-filling stage reduced the average queue length by 5 persons and the average waiting time by 8 minutes and 44 seconds. The patients' waiting times and the number of patients waiting for services in both shifts could be reduced by using multitasking staff and reallocating them to the time-consuming prescription-filling stage, guided by queuing theory and simulation techniques.
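The reported figures can be cross-checked with Little's law, L = λW. A quick sketch using the morning and evening values from the study (times in minutes):

```python
def implied_arrival_rate(L, W):
    """Little's law L = lam * W, solved for the arrival rate lam."""
    return L / W

# Morning: 19.21 patients in system, 39 min average time in system.
# Evening: 14.66 patients in system, 35 min average time in system.
lam_morning = implied_arrival_rate(19.21, 39.0)
lam_evening = implied_arrival_rate(14.66, 35.0)
print(round(lam_morning, 2), round(lam_evening, 2))   # prints 0.49 0.42
```

The implied arrival rates of roughly 0.49 and 0.42 patients per minute are a consistency check on the reported averages, not a figure stated in the abstract itself.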
Aviation security cargo inspection queuing simulation model for material flow and accountability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Allgood, Glenn O; Rose, Terri A
Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we develop an aviation security cargo inspection queuing simulation model for material flow and accountability that will allow cargo managers to conduct impact studies of current and proposed business practices as they relate to inspection procedures, material flow, and accountability.
Using multi-class queuing network to solve performance models of e-business sites.
Zheng, Xiao-ying; Chen, De-ren
2004-01-01
Due to e-business's variety of customers with different navigational patterns and demands, a multi-class queuing network is a natural performance model for it. Open multi-class queuing network (QN) models are based on the assumption that no service center is saturated as a result of the combined loads of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time and the average number of requests. The solution technique for closed multi-class QN models is an approximate mean value analysis (MVA) algorithm based on three key equations, because the exact algorithm has huge time and space requirements. Since mixed multi-class QN models include some open and some closed classes, the open classes should be eliminated to create a closed multi-class QN so that the closed-model algorithm can be applied. Corresponding examples are given to show how to apply the algorithms mentioned in this article. These examples indicate that a multi-class QN is a reasonably accurate model of e-business and can be solved efficiently.
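The approximate multi-class MVA discussed above builds on the exact single-class recursion, which fits in a few lines. A minimal single-class sketch with illustrative service demands (the multi-class and approximate variants extend this same loop):

```python
def mva(demands, N, think_time=0.0):
    """Exact single-class Mean Value Analysis for a closed product-form
    queuing network. demands[k]: service demand at center k; N: population
    (N >= 1). Returns (throughput, residence times, queue lengths) at N."""
    Q = [0.0] * len(demands)                  # queue lengths with n-1 customers
    for n in range(1, N + 1):
        R = [d * (1.0 + q) for d, q in zip(demands, Q)]   # residence times
        X = n / (think_time + sum(R))                     # system throughput
        Q = [X * r for r in R]                            # Little's law per center
    return X, R, Q

# Two balanced centers with unit demand, 2 customers, no think time.
X, R, Q = mva([1.0, 1.0], 2)
print(round(X, 4))                            # prints 0.6667
```

The recursion embodies the arrival theorem: a customer arriving at a center in a network with n customers sees the steady-state queue of the same network with n-1 customers.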
Average waiting time in FDDI networks with local priorities
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages locally at a higher protocol layer (e.g., the network layer). Such a method was planned to be used in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution is asymmetric in the FDDI network.
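The conservation-law approach can be illustrated with the classical nonpreemptive M/G/1 priority formulas (Cobham's formula). The sketch below uses generic two-class parameters, not the FDDI token-ring model itself, and verifies Kleinrock's conservation law numerically:

```python
def priority_waits(lams, ES, ES2):
    """Mean queuing delays per class in a nonpreemptive M/G/1 priority queue
    (class 0 = highest priority), via Cobham's formula:
    W_k = W0 / ((1 - sigma_{k-1}) * (1 - sigma_k)), sigma_k = sum of rho up to k."""
    W0 = sum(l * s2 for l, s2 in zip(lams, ES2)) / 2.0    # mean residual service
    rho = [l * s for l, s in zip(lams, ES)]
    waits, sigma = [], 0.0
    for r in rho:
        waits.append(W0 / ((1.0 - sigma) * (1.0 - sigma - r)))
        sigma += r
    return waits

# Two exponential classes (E[S^2] = 2 E[S]^2): lam = 0.3 and 0.4, E[S] = 1.
w = priority_waits([0.3, 0.4], [1.0, 1.0], [2.0, 2.0])
# Kleinrock's conservation law: sum(rho_i * W_i) = rho * W0 / (1 - rho).
lhs = 0.3 * w[0] + 0.4 * w[1]
rhs = 0.7 * 0.7 / 0.3
print(abs(lhs - rhs) < 1e-9)              # prints True
```

The conservation law is what makes local priority assignment a zero-sum reshuffling of delay: favoring one class necessarily lengthens the weighted wait of the others.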
NASA Astrophysics Data System (ADS)
Duan, Haoran
1997-12-01
This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices and SRAM modules, integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time.
The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to the capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed are given in this dissertation.
Priority Queuing on the Docket: Universality of Judicial Dispute Resolution Timing
NASA Astrophysics Data System (ADS)
Mukherjee, Satyam; Whalen, Ryan
2018-01-01
This paper analyzes court priority queuing behavior by examining the time lapse between when a case enters a court’s docket and when it is ultimately disposed of. Using data from the Supreme courts of the United States, Massachusetts, and Canada we show that each court’s docket features a slow decay with a decreasing tail. This demonstrates that, in each of the courts examined, the vast majority of cases are resolved relatively quickly, while there remains a small number of outlier cases that take an extremely long time to resolve. We discuss the implications for this on legal systems, the study of the law, and future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server threshold-based infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimating a number of performance measures.
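The hysteresis policy can be sketched in a slotted-time toy model: one server is always on, and an extra server is switched on (after a setup delay) when the queue exceeds a high threshold, and off when it falls below a low threshold. All parameters are illustrative, and this sketch is far simpler than the paper's analytical multi-server model:

```python
import random

def hysteresis_avg_queue(lam, mu, low, high, setup, steps, seed=1):
    """Slotted-time sketch of threshold-with-hysteresis server control.
    Per slot: one arrival w.p. lam; each active server completes one
    service w.p. mu. Returns the time-averaged queue length."""
    rng = random.Random(seed)
    q, extra_on, countdown, area = 0, False, 0, 0
    for _ in range(steps):
        if rng.random() < lam:
            q += 1
        for _ in range(2 if extra_on else 1):
            if q > 0 and rng.random() < mu:
                q -= 1
        if extra_on:
            if q < low:
                extra_on = False              # light load: switch extra server off
        elif q > high:
            countdown += 1                    # noninstantaneous setup in progress
            if countdown >= setup:
                extra_on, countdown = True, 0
        else:
            countdown = 0                     # load dropped before setup finished
        area += q
    return area / steps

# With lam > mu a single server is unstable; the hysteresis-controlled extra
# server keeps the average queue far smaller than a run where the extra
# server is effectively disabled (high threshold set absurdly large).
controlled = hysteresis_avg_queue(0.6, 0.5, low=2, high=10, setup=5, steps=5000)
disabled = hysteresis_avg_queue(0.6, 0.5, low=2, high=10**9, setup=5, steps=5000)
print(controlled < disabled)                  # prints True
```

The gap between the two thresholds is what prevents the rapid on/off flapping the abstract warns against under instantaneous load fluctuations.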
Modified weighted fair queuing for packet scheduling in mobile WiMAX networks
NASA Astrophysics Data System (ADS)
Satrya, Gandeva B.; Brotoharsono, Tri
2013-03-01
The increase of user mobility and the need for data access anytime also increase interest in broadband wireless access (BWA). The best available quality of experience for mobile data service users is assured for IEEE 802.16e based users. The main problem in assuring a high QoS value is how to allocate available resources among users in order to meet the QoS requirements for criteria such as delay, throughput, packet loss and fairness. The IEEE standards state no specific scheduling mechanism, leaving it open for implementer differentiation. There are five QoS service classes defined by IEEE 802.16: Unsolicited Grant Service (UGS), Extended Real Time Polling Service (ertPS), Real Time Polling Service (rtPS), Non Real Time Polling Service (nrtPS) and Best Effort Service (BE). Each class has different QoS parameter requirements for throughput and delay/jitter constraints. This paper proposes a Modified Weighted Fair Queuing (MWFQ) scheduling scenario based on Weighted Round Robin (WRR) and Weighted Fair Queuing (WFQ). The performance of MWFQ was assessed using the above QoS criteria. The simulation shows that using the concept of total packet size calculation improves the network's performance.
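The flavor of WFQ scheduling by virtual finish tags can be sketched as follows. This is a simplified self-clocked fair queuing variant, not the paper's MWFQ algorithm; the flow names and packet sizes are made up:

```python
import heapq

def wfq_order(flows):
    """Serve backlogged flows by smallest virtual finish tag.
    flows: {name: (weight, [packet sizes])}. A packet's tag is
    max(virtual_time, flow's last tag) + size / weight; the virtual time
    advances to the tag of the packet being served (SCFQ-style)."""
    v = 0.0
    last = {f: 0.0 for f in flows}
    queues = {f: list(pkts) for f, (w, pkts) in flows.items()}
    heap = []
    for f, (w, pkts) in flows.items():
        if pkts:
            heapq.heappush(heap, (max(v, last[f]) + pkts[0] / w, f))
    order = []
    while heap:
        tag, f = heapq.heappop(heap)
        v, last[f] = tag, tag
        order.append(f)
        queues[f].pop(0)
        if queues[f]:
            w = flows[f][0]
            heapq.heappush(heap, (max(v, last[f]) + queues[f][0] / w, f))
    return order

# Flow A has twice B's weight, so with equal-size packets it gets ~2/3 of
# the service slots while both are backlogged.
print(wfq_order({"A": (2.0, [1, 1, 1, 1]), "B": (1.0, [1, 1])}))
# prints ['A', 'A', 'B', 'A', 'A', 'B']
```

Unlike plain WRR, finish-tag scheduling stays fair with variable packet sizes, since the tag increments by size/weight rather than by packet count.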
Modeling and simulation of queuing system for customer service improvement: A case study
NASA Astrophysics Data System (ADS)
Xian, Tan Chai; Hong, Chai Weng; Hawari, Nurul Nazihah
2016-10-01
This study aims to develop a queuing model of UniMall by using a discrete event simulation approach to analyze the service performance that affects customer satisfaction. The performance measures considered in this model are the average time in system, the total number of students served, the number of students in the waiting queue, the waiting time in queue, and the maximum buffer length. ARENA simulation software is used to develop the simulation model, and its output is analyzed. Based on the analysis of the output, it is recommended that the management of UniMall consider introducing shifts and adding another payment counter in the morning.
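A counter model of this kind can be prototyped in a few lines before committing to a full ARENA model. A minimal single-counter sketch using Lindley's recursion, with illustrative rates rather than the study's measured data:

```python
import random

def avg_wait_mm1(lam, mu, n, seed=42):
    """Average waiting time in queue for a single FIFO counter with Poisson
    arrivals (rate lam) and exponential service (rate mu), via Lindley's
    recursion over n customers."""
    rng = random.Random(seed)
    arrival, server_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)           # next arrival time
        start = max(arrival, server_free)         # service begins when counter frees
        total_wait += start - arrival
        server_free = start + rng.expovariate(mu)
    return total_wait / n

# Theory for M/M/1 gives Wq = rho / (mu - lam) = 1.0 at lam = 0.5, mu = 1.0;
# the simulated average should land close to that.
print(round(avg_wait_mm1(0.5, 1.0, 20000), 1))
```

Checking such a sketch against the closed-form M/M/1 result is a quick way to validate the simulation logic before adding the multi-counter and shift details.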
Application of queuing theory in inventory systems with substitution flexibility
NASA Astrophysics Data System (ADS)
Seyedhoseini, S. M.; Rashid, Reza; Kamalpour, Iman; Zangeneh, Erfan
2015-03-01
Considering the competition in today's business environment, tactical planning of a supply chain becomes more complex than before. In many multi-product inventory systems, substitution flexibility can improve profits. This paper aims to prepare a comprehensive substitution inventory model, in which an inventory system with two substitutable products with negligible lead time is considered, and the effects of simultaneous ordering are examined. In this paper, demands of customers for both products are regarded as stochastic parameters, and queuing theory is used to construct a mathematical model. The model has been coded in C++ and analyzed using a real example, where the results indicate the efficiency of the proposed model.
Social stability and helping in small animal societies
Field, Jeremy; Cant, Michael A.
2009-01-01
In primitively eusocial societies, all individuals can potentially reproduce independently. The key fact that we focus on in this paper is that individuals in such societies instead often queue to inherit breeding positions. Queuing leads to systematic differences in expected future fitness. We first discuss the implications this has for variation in behaviour. For example, because helpers nearer to the front of the queue have more to lose, they should work less hard to rear the dominant's offspring. However, higher rankers may be more aggressive than low rankers, even if they risk injury in the process, if aggression functions to maintain or enhance queue position. Second, we discuss how queuing rules may be enforced through hidden threats that rarely have to be carried out. In fishes, rule breakers face the threat of eviction from the group. In contrast, subordinate paper wasps are not injured or evicted during escalated challenges against the dominant, perhaps because they are more valuable to the dominant. We discuss evidence that paper-wasp dominants avoid escalated conflicts by ceding reproduction to subordinates. Queuing rules appear usually to be enforced by individuals adjacent in the queue rather than by dominants. Further manipulative studies are required to reveal mechanisms underlying queue stability and to elucidate what determines queue position in the first place. PMID:19805426
Priority Queuing Models for Hospital Intensive Care Units and Impacts to Severe Case Patients
Hagen, Matthew S.; Jopling, Jeffrey K; Buchman, Timothy G; Lee, Eva K.
2013-01-01
This paper examines several different queuing models for intensive care units (ICU) and the effects on wait times, utilization, return rates, mortalities, and number of patients served. Five separate intensive care units at an urban hospital are analyzed and distributions are fitted for arrivals and service durations. A system-based simulation model is built to capture all possible cases of patient flow after ICU admission. These include mortalities and returns before and after hospital exits. Patients are grouped into 9 different classes that are categorized by severity and length of stay (LOS). Each queuing model varies by the policies that are permitted and by the order the patients are admitted. The first set of models does not prioritize patients, but examines the advantages of smoothing the operating schedule for elective surgeries. The second set analyzes the differences between prioritizing admissions by expected LOS or patient severity. The last set permits early ICU discharges and conservative and aggressive bumping policies are contrasted. It was found that prioritizing patients by severity considerably reduced delays for critical cases, but also increased the average waiting time for all patients. Aggressive bumping significantly raised the return and mortality rates, but more conservative methods balance quality and efficiency with lowered wait times without serious consequences. PMID:24551379
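The effect of severity-based admission described above can be sketched as a small event-driven simulation of a non-preemptive priority queue. The bed count, rates, and class mix below are illustrative assumptions, not the paper's fitted distributions:

```python
import heapq
import random

def simulate_icu(beds=4, lam=1.0, mu=0.3, p_critical=0.3,
                 n_patients=2000, seed=1):
    """Non-preemptive priority M/M/c sketch: critical (class 0) patients
    are admitted ahead of routine (class 1) patients whenever a bed
    frees up. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    t, busy = 0.0, 0
    departures = []          # min-heap of bed-release times
    waiting = []             # min-heap of (class, arrival_time)
    waits = {0: [], 1: []}   # observed waits per class
    for _ in range(n_patients):
        t += rng.expovariate(lam)                 # next arrival
        while departures and departures[0] <= t:  # beds freed before t
            done = heapq.heappop(departures)
            if waiting:                           # admit best-priority patient
                cls, arr = heapq.heappop(waiting)
                waits[cls].append(done - arr)
                heapq.heappush(departures, done + rng.expovariate(mu))
            else:
                busy -= 1
        cls = 0 if rng.random() < p_critical else 1
        if busy < beds:                           # a bed is free: no wait
            busy += 1
            waits[cls].append(0.0)
            heapq.heappush(departures, t + rng.expovariate(mu))
        else:
            heapq.heappush(waiting, (cls, t))
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(waits[0]), mean(waits[1])

w_critical, w_routine = simulate_icu()
```

With these settings critical patients wait substantially less on average than routine patients, mirroring the paper's finding that severity-based prioritisation shortens delays for critical cases at the cost of longer average waits for the rest.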
Shifman, Mark A.; Sayward, Frederick G.; Mattie, Mark E.; Miller, Perry L.
2002-01-01
This case study describes a project that explores issues of quality of service (QoS) relevant to the next-generation Internet (NGI), using the PathMaster application in a testbed environment. PathMaster is a prototype computer system that analyzes digitized cell images from cytology specimens and compares those images against an image database, returning a ranked set of “similar” cell images from the database. To perform NGI testbed evaluations, we used a cluster of nine parallel computation workstations configured as three subclusters using Cisco routers. This architecture provides a local “simulated Internet” in which we explored the following QoS strategies: (1) first-in-first-out queuing, (2) priority queuing, (3) weighted fair queuing, (4) weighted random early detection, and (5) traffic shaping. The study describes the results of using these strategies with a distributed version of the PathMaster system in the presence of different amounts of competing network traffic and discusses certain of the issues that arise. The goal of the study is to help introduce NGI QoS issues to the Medical Informatics community and to use the PathMaster NGI testbed to illustrate concretely certain of the QoS issues that arise. PMID:12223501
An investigation of the impact of prolonged waiting times on blood donors in Ireland.
McKeever, T; Sweeney, M R; Staines, A
2006-02-01
The aim of this study was to investigate the impact of prolonged queuing times on blood donors by measuring their satisfaction levels and positive and negative affect. As donation times have increased in recent years within the Irish Blood Transfusion Service, this is an important issue to examine in a climate where voluntary donors are becoming scarce and demands on people's time are increasing. Eighty-five blood donors were sampled from one urban and one rural blood donor clinic. The respondents completed a questionnaire by means of a face-to-face interview while waiting in the clinic. The questionnaire contained the Positive and Negative Affect Scale (PANAS) and a waiting satisfaction scale. Both actual and perceived waiting times of the donors were noted. Waiting time was found to be negatively related to satisfaction. Inexperienced donors expressed higher levels of negative affect than experienced donors. Urban donors were significantly more satisfied than rural donors. There was a significant difference in perceived waiting time between lone donors and those queuing in a group, with those waiting alone perceiving their wait as shorter. While all respondents stated that they intended to donate again, over one-third stated that prolonged waiting times would be their most likely deterrent. However, only 15% stated that long queuing times might actually prevent them from donating in the future, and almost all respondents said that they would recommend donation to a friend despite long queuing times. Although our results show that the respondents were not satisfied with current waiting times, this did not seem to affect their future intentions to donate. These findings provide some optimism for the future of blood donation in Ireland, as they suggest a strong sense of commitment to donation within the population sampled. Future research could explore the application of 'the service industry' approach to waiting times to blood donation clinics.
An agent-based model for queue formation of powered two-wheelers in heterogeneous traffic
NASA Astrophysics Data System (ADS)
Lee, Tzu-Chang; Wong, K. I.
2016-11-01
This paper presents an agent-based model (ABM) for simulating the queue formation of powered two-wheelers (PTWs) in heterogeneous traffic at a signalized intersection. The main novelty is that the proposed interaction rule describing the position choice behavior of PTWs when queuing in heterogeneous traffic can capture the stochastic nature of the decision making process. The interaction rule is formulated as a multinomial logit model, which is calibrated by using a microscopic traffic trajectory dataset obtained from video footage. The ABM is validated against the survey data for the vehicular trajectory patterns, queuing patterns, queue lengths, and discharge rates. The results demonstrate that the proposed model is capable of replicating the observed queue formation process for heterogeneous traffic.
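The multinomial logit interaction rule assigns each candidate queuing position a probability proportional to the exponential of its utility. A minimal sketch, with hypothetical alternatives and utility values rather than the paper's calibrated coefficients:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j).
    Subtracting the max utility keeps the exponentials numerically stable."""
    m = max(utilities.values())
    expv = {k: math.exp(v - m) for k, v in utilities.items()}
    total = sum(expv.values())
    return {k: e / total for k, e in expv.items()}

# Hypothetical utilities for three positions a PTW might take in a queue
probs = mnl_probabilities({"front_gap": 1.2, "beside_car": 0.4, "rear": -0.5})
```

Sampling from these probabilities at each decision point is what gives an agent-based model of this kind its stochastic position-choice behaviour.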
Evaluation of Job Queuing/Scheduling Software: Phase I Report
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
A soft computing-based approach to optimise queuing-inventory control problem
NASA Astrophysics Data System (ADS)
Alaghebandha, Mohammad; Hajipour, Vahid
2015-04-01
In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective is to minimise the sum of ordering, holding and shortage costs under warehouse space, service level, and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is NP-hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve it. To benchmark the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the algorithm parameter values that yield better solutions, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.
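The paper benchmarks its ICA against genetic and simulated annealing algorithms. As a flavour of the latter, here is a simulated annealing sketch minimising a toy ordering-plus-holding-plus-shortage cost in one decision variable; the cost function and every constant are stand-ins, not the paper's model:

```python
import math
import random

def total_cost(R, order=50.0, hold=2.0, short=10.0, demand=20.0):
    # Toy convex cost: ordering cost falls with max inventory R,
    # holding cost rises, shortage cost falls (illustrative only).
    return order * demand / R + hold * R / 2 + short * demand / (R + 1)

def simulated_annealing(seed=0, T0=10.0, cooling=0.95, steps=300):
    rng = random.Random(seed)
    R = 5.0                                        # initial max-inventory level
    best_cost, best_R = total_cost(R), R
    T = T0
    for _ in range(steps):
        cand = max(1.0, R + rng.uniform(-3.0, 3.0))  # neighbour move
        delta = total_cost(cand) - total_cost(R)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            R = cand                               # accept better or uphill move
            if total_cost(R) < best_cost:
                best_cost, best_R = total_cost(R), R
        T *= cooling                               # cool down
    return best_cost, best_R

best_cost, best_R = simulated_annealing()
```

The same accept-or-reject skeleton carries over to the higher-dimensional queuing-inventory objective; only the cost evaluation and neighbourhood move change.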
Application of queuing theory to patient satisfaction at a tertiary hospital in Nigeria
Ameh, Nkeiruka; Sabo, B.; Oyefabi, M. O.
2013-01-01
Background: Queuing theory is the mathematical approach to the analysis of waiting lines in any setting where the arrival rate of subjects is faster than the system can handle. It is applicable to healthcare settings where the systems have excess capacity to accommodate random variations. Materials and Methods: A cross-sectional descriptive survey was done. Questionnaires were administered to patients who attended the general outpatient department. Observations were also made on the queuing model and the service discipline at the clinic. Questions were meant to obtain demographic characteristics and the time spent on the queue by patients before being seen by a doctor, the time spent with the doctor, their views about the time spent on the queue, and useful suggestions on how to reduce that time. A total of 210 patients were surveyed. Results: The majority of the patients (164, 78.1%) spent 2 h or less on the queue before being seen by a doctor and less than 1 h with the doctor. The majority of the patients (144, 68.5%) were satisfied with the time they spent on the queue before being seen by a doctor. Useful suggestions proffered by the patients to decrease the time spent on the queue included: that more doctors be employed (46, 21.9%), that doctors come to work on time (25, 11.9%), that first-come-first-served be observed strictly (32, 15.2%), and that the records staff desist from collecting bribes from patients in order to place their cards before others. The queuing method employed at the clinic is the multiple single-channel type and the service discipline is priority service. The patients who spent less time on the queue (<1 h) before seeing the doctor were more satisfied than those who spent more time (P < 0.05). Conclusion: The study has revealed that the majority of the patients were satisfied with the practice at the general outpatient department. However, there is a need to employ measures to respond to the suggestions given by the patients, who are the beneficiaries of the hospital services. PMID:23661902
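In the multiple single-channel arrangement described, each doctor can be approximated as an independent M/M/1 queue, which makes the waiting-time arithmetic explicit. A minimal sketch with illustrative rates (patients per hour), not the surveyed clinic's data:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 measures; requires lam < mu for stability."""
    assert lam < mu, "arrival rate must be below service rate"
    rho = lam / mu                 # server (doctor) utilization
    lq = rho ** 2 / (1 - rho)      # mean number waiting in queue
    wq = lq / lam                  # mean queue wait (Little's law)
    w = wq + 1.0 / mu              # mean total time in system
    return rho, wq, w

# One doctor handling 5 patients/hour, with 4 arrivals/hour
rho, wq, w = mm1_metrics(4.0, 5.0)
```

With these rates the utilization is 0.8 and a patient spends one hour in the system on average, 48 minutes of it waiting; pushing utilization higher makes the wait grow sharply, which is why suggestions like adding doctors or enforcing punctuality directly shorten queues.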
NASA Technical Reports Server (NTRS)
Long, Dou; Lee, David; Johnson, Jesse; Gaier, Eric; Kostiuk, Peter
1999-01-01
This report describes an integrated model of air traffic management (ATM) tools under development in two National Aeronautics and Space Administration (NASA) programs: Terminal Area Productivity (TAP) and Advanced Air Transport Technologies (AATT). The model is made by adjusting parameters of LMINET, a queuing network model of the National Airspace System (NAS), which the Logistics Management Institute (LMI) developed for NASA. Operating LMINET with models of various combinations of TAP and AATT tools will give quantitative information about the effects of the tools on operations of the NAS. The costs of delays under different scenarios are calculated. An extension of the Air Carrier Investment Model (ACIM), developed by the Institute for NASA under ASAC, maps the technologies' impacts on NAS operations into cross-comparable benefit estimates for technologies and sets of technologies.
Optimization of airport security lanes
NASA Astrophysics Data System (ADS)
Chen, Lin
2018-05-01
The airport security management systems currently implemented around the world ensure the safety of passengers, but they may not be optimal. This paper aims to design a better security system, one which maximizes security while minimizing inconvenience to passengers. Firstly, we apply a Petri net model to identify the steps where the main bottlenecks lie. Based on average tokens and transition times, the most time-consuming steps of the security process can be found, including inspection of passengers' identification and documents, preparing belongings to be scanned, and retrieving belongings afterwards. Then, we develop a queuing model to identify the factors affecting those time-consuming steps. Effective improvements include converting the current system to a single-queue, multi-server arrangement, intelligently predicting the number of security checkpoints that should be opened, and building green biological convenient lanes. Furthermore, to test the theoretical results, we apply data to simulate the model, and the simulation results are consistent with those obtained through modeling. Finally, we apply our queuing model to a multi-cultural background. The results suggest that by quantifying and modifying the variance in wait time, the model can be applied to individuals with various customs and habits. Overall, our approach considers multiple factors, employs several models, and is supported by extensive calculation, making it practical and reliable. In addition, with more precise data available, we can further test and improve our models.
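The single-queue, multi-server arrangement the paper recommends is the classical M/M/c model, whose waiting probability is given by the Erlang C formula. A sketch with hypothetical checkpoint figures, not the paper's data:

```python
import math

def erlang_c(c, lam, mu):
    """Erlang C: probability an arrival must wait in an M/M/c queue,
    plus the mean wait in queue. Requires lam < c * mu for stability."""
    a = lam / mu                               # offered load in Erlangs
    assert a < c, "unstable: need lam < c * mu"
    head = sum(a ** k / math.factorial(k) for k in range(c))
    tail = a ** c / math.factorial(c) * c / (c - a)
    p_wait = tail / (head + tail)
    wq = p_wait / (c * mu - lam)               # mean queue wait
    return p_wait, wq

# Hypothetical checkpoint: 3 lanes pooled behind one queue,
# 2.5 passengers/min arriving, each lane screening 1 passenger/min
p_wait, wq = erlang_c(3, 2.5, 1.0)
```

Re-running `erlang_c` for different lane counts is one way to "intelligently predict" how many checkpoints should be open for a forecast arrival rate.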
Queue theory for triangular and weibull arrival distribution models (case study of Banyumanik toll)
NASA Astrophysics Data System (ADS)
Sugito; Rahmawati, Rita; Kusuma Wardhani, Jenesia
2018-05-01
Queuing is one of the most common phenomena in daily life, and it also occurs on highways during busy periods. Electronic Toll Collection (ETC) is the new system at the Banyumanik toll gate, in operation since 2014. Before ETC, Banyumanik toll gate users received regular service (the regular toll gate) by paying in cash only. ETC offers more benefits than the regular service, yet automatic toll gate (ETC) users are still few compared to regular toll gate users. To assess the effectiveness of each type of service, this paper applies queuing system analysis. The research was conducted at the Banyumanik toll gate on 26-28 December 2016 for the Ungaran-Semarang direction and on 29-31 December 2016 for the Semarang-Ungaran direction. Observation was done for 11 hours each day, from 07.00 a.m. until 06.00 p.m. There are 4 queue models at the Banyumanik toll gate, fitted to the number of arrivals and the service times. Based on simulation with Arena, the results show that the queue model for the regular toll gate in the Ungaran-Semarang direction is (Tria/G/3):(GD/∞/∞) and the queue model for the automatic toll gate is (G/G/3):(GD/∞/∞), while for the Semarang-Ungaran direction the regular toll gate follows (G/G/3):(GD/∞/∞) and the automatic toll gate follows (Weib/G/3):(GD/∞/∞).
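For non-Markovian models like the (Weib/G/3) gate there is no simple closed form, but Kingman's heavy-traffic formula approximates the mean wait from the squared coefficients of variation of inter-arrival and service times. A single-booth sketch with illustrative rates, not the Banyumanik measurements:

```python
def kingman_wait(lam, mu, ca2, cs2):
    """Kingman's G/G/1 approximation for the mean queue wait:
    Wq ~ rho/(1-rho) * (ca2+cs2)/2 * (1/mu), where ca2 and cs2 are the
    squared coefficients of variation of inter-arrival and service times."""
    rho = lam / mu
    assert rho < 1, "unstable queue"
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (1 / mu)

# Weibull-like arrivals are often more regular than Poisson (ca2 < 1),
# which shortens the predicted queue relative to a Markovian gate
wq_weibull = kingman_wait(0.8, 1.0, ca2=0.6, cs2=0.8)
wq_poisson = kingman_wait(0.8, 1.0, ca2=1.0, cs2=1.0)
```

With ca2 = cs2 = 1 the formula reduces exactly to the M/M/1 mean wait, which is a convenient sanity check before trusting it on fitted triangular or Weibull distributions.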
A dynamical framework for integrated corridor management.
DOT National Transportation Integrated Search
2016-01-11
We develop analysis and control synthesis tools for dynamic traffic flow over networks. Our analysis relies on exploiting monotonicity properties of the dynamics, and on adapting relevant tools from stochastic queuing networks. We develop proport...
Data-driven traffic impact assessment tool for work zones.
DOT National Transportation Integrated Search
2017-03-01
Traditionally, traffic impacts of work zones have been assessed using planning software such as Quick Zone, custom spreadsheets, and others. These software programs generate delay, queuing, and other mobility measures but are difficult to validate du...
Traffic flow characteristic and capacity in intelligent work zones.
DOT National Transportation Integrated Search
2009-10-15
Intelligent transportation system (ITS) technologies are utilized to manage traffic flow and safety in highway work zones. Traffic management plans for work zones require queuing analyses to determine the anticipated traffic backups, but the predi...
Job Scheduling Under the Portable Batch System
NASA Technical Reports Server (NTRS)
Henderson, Robert L.; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The typical batch queuing system schedules jobs for execution by a set of queue controls. The controls determine from which queues jobs may be selected. Within a queue, jobs are ordered first-in, first-run. This limits the set of scheduling policies available to a site. The Portable Batch System removes this limitation by providing an external scheduling module. This separate program has full knowledge of the available queued jobs, running jobs, and system resource usage. Sites are able to implement any policy expressible in one of several procedural languages. Policies may range from "best fit" to "fair share" to purely political. Scheduling decisions can be made over the full set of jobs regardless of queue or order. The scheduling policy can be changed to fit a wide variety of computing environments and scheduling goals. This is demonstrated by the use of PBS on an IBM SP-2 system at NASA Ames.
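The external scheduler's key property is that it ranks the full set of queued jobs by a site policy rather than by queue order. A toy scheduling pass in that spirit; the job fields and the smallest-job-first policy are stand-ins, not PBS's actual API:

```python
def schedule_pass(jobs, free_nodes):
    """Rank ALL queued jobs by a site policy (here smallest-first),
    then start every job that still fits in the free nodes."""
    started = []
    for job in sorted(jobs, key=lambda j: j["nodes"]):  # policy, not FIFO
        if job["nodes"] <= free_nodes:
            started.append(job["id"])
            free_nodes -= job["nodes"]
    return started

queue = [{"id": "a", "nodes": 8},
         {"id": "b", "nodes": 2},
         {"id": "c", "nodes": 4}]
started = schedule_pass(queue, free_nodes=6)   # the 8-node job must wait
```

Swapping the `key` function is all it takes to move from "best fit" toward "fair share" or a purely political ordering, which is the flexibility the paper attributes to the separate scheduling module.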
Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A
2004-10-01
In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks.
Virtual Queue in a Centralized Database Environment
NASA Astrophysics Data System (ADS)
Kar, Amitava; Pal, Dibyendu Kumar
2010-10-01
Today is the era of the Internet: whether it is gathering knowledge, planning a holiday, or booking a ticket, almost everything can be done over the internet. This paper calculates various queuing measures for bookings or purchases made through the internet, subject to limits on the number of tickets or seats; such transactions involve many database activities, such as reads and writes. The time at which a service is requested is treated as the arrival process, and the time taken to provide the required information as the service process; on this basis the paper derives the arrival and service distributions and the various queuing measures. For the sake of simplicity, the database is taken to be a centralized database, as the alternative of a distributed database would complicate the calculation.
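The arrival and service distributions the paper derives start from simple measurements of the request log. A sketch of estimating the basic rates and traffic intensity from timestamps (the data shapes here are hypothetical):

```python
def estimate_rates(arrival_times, service_durations):
    """Estimate the arrival rate (requests per unit time), service rate,
    and traffic intensity rho from booking-server log measurements."""
    span = arrival_times[-1] - arrival_times[0]
    lam = (len(arrival_times) - 1) / span          # mean arrival rate
    mu = len(service_durations) / sum(service_durations)
    return lam, mu, lam / mu

# Five requests over 4 time units, each served in 0.4 units
lam, mu, rho = estimate_rates([0.0, 1.0, 2.0, 3.0, 4.0], [0.4] * 5)
```

Once lam and mu are in hand, the standard queuing measures (mean queue length, mean response time) follow from whichever model the fitted distributions justify.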
An Integrated Model of Patient and Staff Satisfaction Using Queuing Theory
Komashie, Alexander; Mousavi, Ali; Clarkson, P. John; Young, Terry
2015-01-01
This paper investigates the connection between patient satisfaction, waiting time, staff satisfaction, and service time. It uses a variety of models to enable improvement against experiential and operational health service goals. Patient satisfaction levels are estimated using a model based on waiting (waiting times). Staff satisfaction levels are estimated using a model based on the time spent with patients (service time). An integrated model of patient and staff satisfaction, the effective satisfaction level model, is then proposed (using queuing theory). This links patient satisfaction, waiting time, staff satisfaction, and service time, connecting two important concepts, namely, experience and efficiency in care delivery and leading to a more holistic approach in designing and managing health services. The proposed model will enable healthcare systems analysts to objectively and directly relate elements of service quality to capacity planning. Moreover, as an instrument used jointly by healthcare commissioners and providers, it affords the prospect of better resource allocation. PMID:27170899
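The core idea, satisfaction that falls with patient waiting but also with rushed consultations, can be illustrated with a toy effective-satisfaction function. The functional forms and constants below are assumptions for illustration, not the paper's model:

```python
def effective_satisfaction(lam, mu, w_ref=0.5, s_ref=0.2):
    """Illustrative stand-in for an effective satisfaction level:
    patient satisfaction decays as the M/M/1 queue wait grows, while
    staff satisfaction decays as the time per patient (1/mu) is squeezed."""
    assert lam < mu
    wq = lam / (mu * (mu - lam))         # M/M/1 mean queue wait
    patient = 1.0 / (1.0 + wq / w_ref)   # falls as waits lengthen
    staff = 1.0 / (1.0 + s_ref * mu)     # falls as consultations shorten
    return patient * staff

# Sweeping capacity exposes the trade-off the paper formalises:
s_slow = effective_satisfaction(4.0, 4.2)   # long waits hurt patients
s_mid = effective_satisfaction(4.0, 6.0)    # balanced
s_fast = effective_satisfaction(4.0, 60.0)  # rushed staff
```

The interior maximum is the point of the integrated model: capacity planning can target a satisfaction peak rather than simply minimising waits.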
Second Evaluation of Job Queuing/Scheduling Software. Phase 1
NASA Technical Reports Server (NTRS)
Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)
1997-01-01
The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes this second evaluation and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking; however, definite progress has been made by the vendors to correct the deficiencies. This report is supplemented by a WWW interface to the data collected, to aid other sites in extracting the evaluation information on specific requirements of interest.
Queuing theory models for computer networks
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models omit fine detail of the network traffic rates, traffic patterns, and the hardware used to implement the networks, the impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed quickly. A sample use of the models applied to a realistic problem is included in appendix A, and appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
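The kind of coarse spreadsheet model described can be reproduced with Jackson's theorem: in an open network, each hop behaves like an independent M/M/1 queue, so the mean end-to-end response is a sum of per-hop terms. The rates and capacities below are illustrative, not Ames network data:

```python
def tandem_response_time(lam, service_rates):
    """Mean end-to-end response of a tandem of M/M/1 hops carrying a
    Poisson message stream of rate lam: the sum of 1/(mu - lam) per hop."""
    assert all(lam < mu for mu in service_rates), "every hop must keep up"
    return sum(1.0 / (mu - lam) for mu in service_rates)

# Two LAN hops (80 msg/s capacity) around a faster backbone hop
# (200 msg/s), carrying 50 msg/s of traffic
t = tandem_response_time(50.0, [80.0, 200.0, 80.0])
```

Varying the per-hop capacities in such a formula is exactly the sort of what-if a spreadsheet model supports when planning channel upgrades.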
Work-related symptoms and checkstand configuration: an experimental study.
Harber, P; Bloswick, D; Luo, J; Beck, J; Greer, D; Peña, L F
1993-07-01
Supermarket checkers are known to be at risk of upper-extremity cumulative trauma disorders. Forty-two experienced checkers checked a standard "market basket" of items on an experimental checkstand. The counter height could be adjusted (high = 35.5, low = 31.5 inches), and the pre-scan queuing area length (between conveyor belt and laser scanner) could be set to "near" or "far" lengths. Each subject scanned under the high-near, high-far, low-near, and low-far conditions in random order. Seven ordinal symptom scales were used to describe comfort. Analysis showed that both counter height and queuing length had significant effects on symptoms. Furthermore, the height of the subject affected the degree and direction of the impact of the checkstand configuration differences. The study suggests that optimization of design may be experimentally evaluated, that modification of postural as well as frequency loading may be beneficial, and that adjustability for the individual may be advisable.
Advance traffic control warning systems for maintenance operations : final report.
DOT National Transportation Integrated Search
1976-07-01
The report discusses the effect of certain variables defined by sign size, height of installation and legend on the driver responses as measured by speed, conflict and queuing parameters. Effects of electronically actuated, directional flashing signs...
Real-Time Operating System/360
NASA Technical Reports Server (NTRS)
Hoffman, R. L.; Kopp, R. S.; Mueller, H. H.; Pollan, W. D.; Van Sant, B. W.; Weiler, P. W.
1969-01-01
RTOS has a cost savings advantage for real-time applications, such as those with random inputs requiring a flexible data routing facility, display systems simplified by a device independent interface language, and complex applications needing added storage protection and data queuing.
Modeling Human Supervisory Control in Heterogeneous Unmanned Vehicle Systems
2009-02-01
events through a queue, nominally due to another queue having reached its capacity limitation (Balsamo, Persone, & Onvural, 2001; Onvural, 1990; Perros ... Communication and Coordination, Athens, Greece. Perros, H. G. (1984). Queuing Networks with Blocking: A Bibliography. ACM Sigmetrics, Performance Evaluation
L&D Manual Turn Lane Storage Validation/Update : Executive Summary Report
DOT National Transportation Integrated Search
2012-08-01
The formation of queues on a highway facility is a sign of the presence of operationally inefficient sections of the facility. Queuing occurs at intersections in large part due to overflow or inadequacy of turn bays, inadequate capacity, or poor ...
Standfield, L; Comans, T; Raymer, M; O'Leary, S; Moretto, N; Scuffham, P
2016-08-01
Hospital outpatient orthopaedic services traditionally rely on medical specialists to assess all new patients to determine appropriate care. This has resulted in significant delays in service provision. In response, Orthopaedic Physiotherapy Screening Clinics and Multidisciplinary Services (OPSC) have been introduced to assess and co-ordinate care for semi- and non-urgent patients. The aim was to compare the efficiency of delivering increased semi- and non-urgent orthopaedic outpatient services through: (1) additional OPSC services; (2) additional traditional orthopaedic medical services with added surgical resources (TOMS + Surg); or (3) additional TOMS without added surgical resources (TOMS - Surg). A cost-utility analysis using discrete event simulation (DES) with dynamic queuing (DQ) was used to predict the cost effectiveness, throughput, queuing times, and resource utilisation associated with introducing additional OPSC or TOMS ± Surg versus usual care. The introduction of additional OPSC or TOMS (± surgery) would be considered cost effective in Australia; however, OPSC was the most cost-effective option. Increasing the capacity of current OPSC services is an efficient way to improve patient throughput and waiting times without exceeding current surgical resources. An OPSC capacity increase of ~100 patients per month appears cost effective (A$8546 per quality-adjusted life-year) and results in a high level of OPSC utilisation (98%). Increasing OPSC capacity to manage semi- and non-urgent patients would be cost effective, improve throughput, and reduce waiting times without exceeding current surgical resources. Unlike Markov cohort modelling, microsimulation, or DES without DQ, employing DES-DQ in situations where capacity constraints predominate provides valuable additional information beyond cost effectiveness to guide resource allocation decisions.
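The value of dynamic queuing in such models comes from tracking a waiting list that carries over between periods. A stripped-down monthly recursion in that spirit; the figures are illustrative, not the OPSC data:

```python
def waiting_list(arrivals, capacity, months, backlog=0):
    """Each month the list grows by new referrals and shrinks by the
    service capacity; the implied wait is the backlog expressed in
    months of work (a Little's-law-style reading)."""
    waits = []
    for _ in range(months):
        backlog = max(0, backlog + arrivals - capacity)
        waits.append(backlog / capacity)
    return backlog, waits

# Adding ~100 patients/month of screening capacity clears a backlog
b_under, _ = waiting_list(500, 450, months=24, backlog=1000)
b_extra, _ = waiting_list(500, 550, months=24, backlog=1000)
```

Models without dynamic queuing miss exactly this carry-over effect: when capacity sits below demand, the backlog (and the implied wait) grows without bound rather than settling at a steady state.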
L&D Manual Turn Lane Storage Validation/Update
DOT National Transportation Integrated Search
2012-08-01
Queuing occurs at intersections mostly due to overflow or inadequacy of turn bays. The ODOT L&D Manual Volume 1 has storage requirements for both signalized and unsignalized intersections. Figures 401-9E and 401-10E of the L&D Manual provide the ...
He, Xinhua; Hu, Wenfa
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
Spectrally queued feature selection for robotic visual odometry
NASA Astrophysics Data System (ADS)
Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike
2011-01-01
Over the last two decades, research in Unmanned Vehicles (UV) has rapidly progressed and become more influenced by the biological sciences. Researchers have been investigating mechanical aspects of varying species to improve the intrinsic air and ground mobility of UVs; they have been exploring the computational aspects of the brain for the development of pattern recognition and decision algorithms; and they have been exploring the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US Army Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis of multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of the data sets required to enable human-like behavior in these systems has yet to be defined.
NASA Astrophysics Data System (ADS)
Motaghedi-Larijani, Arash; Aminnayeri, Majid
2017-03-01
Cross-docking is a supply-chain strategy that can reduce transportation and inventory costs. This study is motivated by a fruit and vegetable distribution centre in Tehran, which has cross-docks and a limited time window to admit outbound trucks. In this article, outbound trucks are assumed to arrive at a cross-dock with a single outbound door, with arrival times uniformly distributed on (0, L). The total number of assigned trucks is constant and the loading time is fixed. A queuing model is adapted to this situation, the expected waiting time of each customer is calculated, and a curve is fitted to the waiting time. Finally, the length of the time window L is optimized to minimize the total cost, which comprises the waiting cost of the trucks and the admission cost of the cross-dock. Some illustrative examples of cross-docking are presented and solved using the proposed method.
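The trade-off being optimized (a longer admission window reduces truck waiting but keeps the door open and idle longer) can be probed numerically. A Monte-Carlo sketch with hypothetical figures, not the Tehran centre's data:

```python
import random

def mean_truck_wait(n_trucks, window, load_time, reps=2000, seed=0):
    """Monte-Carlo estimate of the mean wait at a single outbound door:
    n_trucks arrive Uniform(0, window) and are loaded in arrival order,
    each taking a fixed load_time. All figures are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        arrivals = sorted(rng.uniform(0.0, window) for _ in range(n_trucks))
        door_free = 0.0
        for a in arrivals:
            start = max(a, door_free)   # wait whenever the door is busy
            total += start - a
            door_free = start + load_time
    return total / (reps * n_trucks)

# Widening the admission window L trades door idleness for shorter waits
w_short = mean_truck_wait(10, window=5.0, load_time=0.5)
w_long = mean_truck_wait(10, window=10.0, load_time=0.5)
```

Adding a per-hour door cost to the estimated waiting cost and scanning over the window length gives a numerical version of the paper's optimisation of L.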
The Queued Service Observing Project at CFHT
NASA Astrophysics Data System (ADS)
Martin, Pierre; Savalle, Renaud; Vermeulen, Tom; Shapiro, Joshua N.
2002-12-01
In order to maximize the scientific productivity of the CFH12K mosaic wide-field imager (and soon MegaCam), the Queued Service Observing (QSO) mode was implemented at the Canada-France-Hawaii Telescope at the beginning of 2001. The QSO system consists of an ensemble of software components allowing for the submission of programs, the preparation of queues, and finally the execution and evaluation of observations. The QSO project is part of a broader system known as the New Observing Process (NOP). This system includes data acquisition, data reduction and analysis through a pipeline named Elixir, and a data archiving and distribution component (DADS). In this paper, we review several technical and operational aspects of the QSO project. In particular, we present our strategy, technical architecture, program submission system, and the tools developed for the preparation and execution of the queues. Our successful experience of over 150 nights of QSO operations is also discussed along with the future plans for queue observing with MegaCam and other instruments at CFHT.
He, Xinhua
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in large-scale disaster-affected areas. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.
exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes, an optimization
Integrating Reservations and Queuing in Remote Laboratory Scheduling
ERIC Educational Resources Information Center
Lowe, D.
2013-01-01
Remote laboratories (RLs) have become increasingly seen as a useful tool in supporting flexible shared access to scarce laboratory resources. An important element in supporting shared access is coordinating the scheduling of the laboratory usage. Optimized scheduling can significantly decrease access waiting times and improve the utilization level…
Targeting LSD1 Epigenetic Signature in Castration-Recurrent Prostate Cancer
2014-10-01
Sequencing libraries have been prepared and samples are currently queued at the Genomic Core facility at Roswell Park and results will be available...results to various meetings and seminars at Roswell Park, allowing me to get useful feedback from both clinicians and researchers. Furthermore, a brief
Minimizing the Delay at Traffic Lights
ERIC Educational Resources Information Center
Van Hecke, Tanja
2009-01-01
Vehicles holding at traffic lights is a typical queuing problem. At crossings the vehicles experience delay in both directions. Longer periods with green lights in one direction are disadvantageous for the vehicles coming from the other direction. The total delay for getting through the traffic point is what counts. This article presents an…
Implementation of the Automated Numerical Model Performance Metrics System
2011-09-26
question. As of this writing, the DSRC IBM AIX machines DaVinci and Pascal, and the Cray XT Einstein all use the PBS batch queuing system for...3.3). Appendix A – General Automation System: This system provides general purpose tools and a general way to automatically run
Phenomena of drag reduction on saltating sediment in shallow, supercritical flows
USDA-ARS?s Scientific Manuscript database
ABSTRACT: When a group of objects move through a fluid, it often exhibits coordinated behavior in which bodies in the wake of a leader generally experience reduced drag. Locomotion provides well known examples including the maneuvering and clustering of racing automobiles and bicyclists and queuing...
Nested Fork-Join Queuing Networks and Their Application to Mobility Airfield Operations Analysis.
1997-03-01
shortest queue length. Setia, Squillante, and Tripathi [109] extend Makowski and Nelson's work by performing a quantitative assessment of a range of...Markov chains." Numerical Solution of Markov Chains, edited by W. J. Stewart, 63-88. Basel: Marcel Dekker, 1991. [109] Setia, S. K., and others
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... following methods: A. http://www.regulations.gov . Follow the on-line instructions for submitting comments... the equipment is in good working order, if necessary as part of the inspection; (6) idling of a... off school property during queuing for the sequential discharge or pickup of students where the...
Estimating Performance of Single Bus, Shared Memory Multiprocessors
1987-05-01
Chandy78] K.M. Chandy, C.M. Sauer, "Approximate methods for analyzing queuing network models of computing systems," Computing Surveys, vol. 10, no. 3...Denning78] P. Denning, J. Buzen, "The operational analysis of queueing network models," Computing Surveys, vol. 10, no. 3, September 1978, pp 225-261
2014-12-26
administrators' dashboard, so that they can be effectively triaged, analyzed, and used to implement defensive actions to keep the network safe and...For the bank teller, some customers will require straightforward services (a quick deposit or cashing a check) while others will have questions or
Single stage queueing/manufacturing system model that involves emission variable
NASA Astrophysics Data System (ADS)
Murdapa, P. S.; Pujawan, I. N.; Karningsih, P. D.; Nasution, A. H.
2018-04-01
Queueing commonly occurs in every industry, and the basic model of queueing theory provides a foundation for modeling a manufacturing system. Nowadays, carbon emission is an important and unavoidable issue because of its huge environmental impact. However, existing queuing models applied to the analysis of single-stage manufacturing systems have not taken carbon emissions into consideration; applied in a manufacturing context, they may lead to improper decisions. Taking emission variables into account in queuing models not only makes the models more comprehensive but also raises awareness of the issue among the parties involved in the system. This paper discusses a single-stage M/M/1 queueing model that involves an emission variable; it is intended as a starting point for more complex models. Its main objective is to determine how carbon emissions can fit into basic queueing theory. It turns out that introducing emission variables modifies the traditional single-stage queue model into a model for calculating the production lot quantity allowed per period.
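The paper's exact emission formulation is not reproduced here. As a hedged sketch, the standard M/M/1 performance measures can be extended with an assumed emission rate that mixes a busy-server and an idle-server intensity; `e_busy` and `e_idle` are hypothetical parameters, not values from the paper:

```python
def mm1_metrics(lam, mu, e_busy=1.0, e_idle=0.2):
    """Standard M/M/1 measures plus a simple (assumed) emission rate:
    emissions accrue at e_busy while the server works (fraction rho
    of the time) and at e_idle while it idles (fraction 1 - rho)."""
    if lam >= mu:
        raise ValueError("unstable: need lambda < mu")
    rho = lam / mu                  # utilization
    L = rho / (1 - rho)             # mean number in system
    W = 1 / (mu - lam)              # mean time in system
    Wq = rho / (mu - lam)           # mean waiting time in queue
    emissions = e_busy * rho + e_idle * (1 - rho)
    return {"rho": rho, "L": L, "W": W, "Wq": Wq,
            "emissions_per_unit_time": emissions}

m = mm1_metrics(lam=4.0, mu=5.0)
print(m)
```

Under this toy coupling, any decision that changes utilization (e.g. lot sizing that alters the arrival rate) changes the emission rate as well, which is the kind of interaction the abstract points at.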
Modeling and measurement of fault-tolerant multiprocessors
NASA Technical Reports Server (NTRS)
Shin, K. G.; Woodbury, M. H.; Lee, Y. H.
1985-01-01
The workload effects on computer performance are addressed first for a highly reliable unibus multiprocessor used in real-time control. As an approach to studying these effects, a modified Stochastic Petri Net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity in solving the modified SPN, a simpler model, i.e., a closed priority queuing network, is constructed that represents the same critical aspects. The use of this model for a specific application requires the partitioning of the workload into job classes. It is shown that the steady state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related works have assumed no or a negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.
Berkeley lab checkpoint/restart (BLCR) for Linux clusters
Hargrove, Paul H.; Duell, Jason C.
2006-09-01
This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.
A Study on Coexistence Capability Evaluations of the Enhanced Channel Hopping Mechanism in WBANs
Wei, Zhongcheng; Sun, Yongmei; Ji, Yuefeng
2017-01-01
As an important coexistence technology, channel hopping can reduce the interference among Wireless Body Area Networks (WBANs). However, it simultaneously brings issues such as energy waste, long latency and communication interruptions. In this paper, we propose an enhanced channel hopping mechanism that allows multiple WBANs to coexist in the same channel. In order to evaluate the coexistence performance, some critical metrics are designed to reflect the possibility of channel conflict. Furthermore, by taking the queuing and non-queuing behaviors into consideration, we present a set of analysis approaches to evaluate the coexistence capability. On the one hand, we present both service-dependent and service-independent analysis models to estimate the number of coexisting WBANs. On the other hand, based on the uniform distribution assumption and the additive property of Poisson streams, we put forward two approximate methods to compute the number of occupied channels. Extensive simulation results demonstrate that our estimation approaches can provide an effective solution for coexistence capability estimation. Moreover, the enhanced channel hopping mechanism can significantly improve the coexistence capability and support a larger arrival rate of WBANs. PMID:28098818
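One ingredient cited above, the additive property of Poisson streams (superposing independent Poisson processes of rates r1 and r2 yields a Poisson process of rate r1 + r2), can be checked with a quick simulation; the rates and horizon here are arbitrary:

```python
import random

def poisson_count(rate, horizon, rng):
    """Number of arrivals of a Poisson process of the given rate over
    [0, horizon), generated from exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t >= horizon:
            return n
        n += 1

rng = random.Random(3)
trials = 5000
# Superpose two independent streams of rates 2.0 and 3.0 per unit time.
merged = [poisson_count(2.0, 1.0, rng) + poisson_count(3.0, 1.0, rng)
          for _ in range(trials)]
mean = sum(merged) / trials
print(mean)   # the merged stream behaves like a single rate-5 stream
```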
Saloma, Caesar; Perez, Gay Jane; Gavile, Catherine Ann; Ick-Joson, Jacqueline Judith; Palmes-Saloma, Cynthia
2015-01-01
We study the impact of prior individual training during group emergency evacuation using mice that escape from an enclosed water pool to a dry platform via any of two possible exits. Experimenting with mice avoids serious ethical and legal issues that arise when dealing with unwitting human participants while minimizing concerns regarding the reliability of results obtained from simulated experiments using ‘actors’. First, mice were trained separately and their individual escape times measured over several trials. Mice learned quickly to swim towards an exit–they achieved their fastest escape times within the first four trials. The trained mice were then placed together in the pool and allowed to escape. No two mice were permitted in the pool beforehand and only one could pass through an exit opening at any given time. At first trial, groups of trained mice escaped seven and five times faster than their corresponding control groups of untrained mice at pool occupancy rate ρ of 11.9% and 4%, respectively. Faster evacuation happened because trained mice: (a) had better recognition of the available pool space and took shorter escape routes to an exit, (b) were less likely to form arches that blocked an exit opening, and (c) utilized the two exits efficiently without preference. Trained groups achieved continuous egress without an apparent leader-coordinator (self-organized queuing)—a collective behavior not experienced during individual training. Queuing was unobserved in untrained groups where mice were prone to wall seeking, aimless swimming and/or blind copying that produced circuitous escape routes, biased exit use and clogging. The experiments also reveal that faster and less costly group training at ρ = 4%, yielded an average individual escape time that is comparable with individualized training. However, group training in a more crowded pool (ρ = 11.9%) produced a longer average individual escape time. PMID:25693170
Modeling bursts and heavy tails in human dynamics
NASA Astrophysics Data System (ADS)
Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László
2006-03-01
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follow non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experiencing very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution P(τ_w) ∼ τ_w^(−α) with α = 3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by α = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α = 1, the surface mail based communication belongs to the α = 3/2 universality class. Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
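The second model above (the fixed-length priority list) is straightforward to simulate. Below is a minimal sketch with a two-task list: with probability p the highest-priority task is executed, otherwise a random one, and each executed task is replaced by a fresh task with a uniform random priority. The parameters are illustrative; the point is that most tasks are executed immediately while a few wait very long, the signature of the heavy tail:

```python
import random

def simulate_priority_list(steps=100_000, list_len=2, p=0.9999, seed=42):
    """Fixed-length priority-list model: each step, execute the
    highest-priority task with probability p (else a random one) and
    replace it with a fresh task of uniform random priority.
    Returns the waiting times (steps spent on the list) of executed tasks."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]  # (priority, age)
    waits = []
    for _ in range(steps):
        if rng.random() < p:
            i = max(range(list_len), key=lambda k: tasks[k][0])
        else:
            i = rng.randrange(list_len)
        waits.append(tasks[i][1] + 1)                 # record waiting time
        tasks = [(pr, age + 1) for pr, age in tasks]  # everyone ages
        tasks[i] = (rng.random(), 0)                  # replace executed task
    return waits

waits = simulate_priority_list()
frac_immediate = sum(w == 1 for w in waits) / len(waits)
print(max(waits), frac_immediate)
```

A low-priority task can only leave the list when an even lower-priority task arrives (or via the rare random pick), so its waiting time grows enormously while the bulk of tasks are dispatched in a single step.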
Fairness in the coronary angiography queue.
Alter, D A; Basinski, A S; Cohen, E A; Naylor, C D
1999-10-05
Since waiting lists for coronary angiography are generally managed without explicit queuing criteria, patients may not receive priority on the basis of clinical acuity. The objective of this study was to examine clinical and nonclinical determinants of the length of time patients wait for coronary angiography. In this single-centre prospective cohort study conducted in the autumn of 1997, 357 consecutive patients were followed from initial triage until a coronary angiography was performed or an adverse cardiac event occurred. The referring physicians' hospital affiliation (physicians at Sunnybrook & Women's College Health Sciences Centre, those who practice at another centre but perform angiography at Sunnybrook and those with no previous association with Sunnybrook) was used to compare processes of care. A clinical urgency rating scale was used to assign a recommended maximum waiting time (RMWT) to each patient retrospectively, but this was not used in the queuing process. RMWTs and actual waiting times for patients in the 3 referral groups were compared; the influence clinical and nonclinical variables had on the actual length of time patients waited for coronary angiography was assessed; and possible predictors of adverse events were examined. Of 357 patients referred to Sunnybrook, 22 (6.2%) experienced adverse events while in the queue. Among those who remained, 308 (91.9%) were in need of coronary angiography; 201 (60.0%) of those patients received one within the RMWT. The length of time to angiography was influenced by clinical characteristics similar to those specified on the urgency rating scale, leading to a moderate agreement between actual waiting times and RMWTs (kappa = 0.53). However, physician affiliation was a highly significant (p < 0.001) and independent predictor of waiting time. Whereas 45.6% of the variation in waiting time was explained by all clinical factors combined, 9.3% of the variation was explained by physician affiliation alone. 
Informal queuing practices for coronary angiography do reflect clinical acuity, but they are also influenced by nonclinical factors, such as the nature of the physicians' association with the catheterization facility.
Yang, Muer; Fry, Michael J; Raikhelkar, Jayashree; Chin, Cynthia; Anyanwu, Anelechi; Brand, Jordan; Scurlock, Corey
2013-02-01
To develop queuing and simulation-based models to understand the relationship between ICU bed availability and the operating room schedule, in order to maximize the use of critical care resources and minimize case cancellation while providing equity to patients and surgeons. Retrospective analysis of 6-month unit admission data from a cohort of cardiothoracic surgical patients, to create queuing and simulation-based models of ICU bed flow. Three different admission policies (current admission policy, shortest-processing-time policy, and a dynamic policy) were then analyzed using simulation models, representing 10 yr worth of potential admissions. Important output data consisted of the "average waiting time," a proxy for unit efficiency, and the "maximum waiting time," a surrogate for patient equity. A cardiothoracic surgical ICU in a tertiary center in New York, NY. Six hundred thirty consecutive cardiothoracic surgical patients admitted to the cardiothoracic surgical ICU. None. Although the shortest-processing-time admission policy performs best in terms of unit efficiency (0.4612 days), it does so at the expense of patient equity, prolonging surgical waiting time by as much as 21 days. The current policy gives the greatest equity but causes inefficiency in unit bed flow (0.5033 days). The dynamic policy performs at a level (0.4997 days) 8.3% worse than the shortest-processing-time policy in average waiting time; however, it balances this with greater patient equity (the maximum waiting time could be shortened by 4 days compared to the current policy). Queuing theory and computer simulation can be used to model case flow through a cardiothoracic operating room and ICU. A dynamic admission policy that looks at current waiting time and expected ICU length of stay allows for increased equity between patients with only minimal losses of efficiency. This dynamic admission policy would seem to be superior in maximizing case flow. These results may be generalized to other surgical ICUs.
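The core effect behind the shortest-processing-time result can be shown with a toy, single-server batch example (this is not the paper's ICU simulation; case durations below are randomly generated placeholders). In a static batch, SPT provably minimizes the average wait; the equity cost the paper reports arises with ongoing arrivals, where newly arriving short cases can repeatedly push back a long case:

```python
import random

def waits_for_order(durations, order):
    """Waiting time of each case when served one at a time in 'order'."""
    t, waits = 0.0, {}
    for i in order:
        waits[i] = t
        t += durations[i]
    return waits

rng = random.Random(7)
los = [rng.uniform(0.5, 5.0) for _ in range(30)]   # hypothetical stays

fcfs = waits_for_order(los, list(range(len(los))))                   # arrival order
spt = waits_for_order(los, sorted(range(len(los)), key=lambda i: los[i]))

avg = lambda w: sum(w.values()) / len(w)
print(avg(fcfs), avg(spt))   # SPT never has a worse batch average
```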
ERIC Educational Resources Information Center
Allen, Frank R.; Smith, Rita H.
1993-01-01
Describes a survey that was conducted at the University of Tennessee at Knoxville library to count and categorize the types of questions coming into the reference department from telephone calls. Informational and directional calls are examined, implications for staffing are considered, and queuing theory is applied. (seven references) (LRW)
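Applying queuing theory to telephone-desk staffing typically means the M/M/c delay model. As a sketch, the Erlang C formula below gives the probability that an incoming call must wait, for assumed call and service rates (the figures are illustrative, not from the survey):

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability that an arriving call must wait in an M/M/c queue
    (Erlang C): arrival rate lam, service rate mu per server, c servers."""
    a = lam / mu                      # offered load in Erlangs
    if a >= c:
        return 1.0                    # unstable: every call queues
    top = a**c / factorial(c) * (c / (c - a))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

# e.g. 12 calls/hour, each librarian handling 10 calls/hour:
print(erlang_c(12, 10, 2), erlang_c(12, 10, 3))
```

Adding a third librarian in this hypothetical cuts the probability of queuing from 45% to about 14%, the kind of staffing calculation the article's queuing analysis supports.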
'people queued for hours to see him'.
Carlisle, Daloni
2012-05-30
I met Khalil Dale (pictured) in 1995 when writing about the International Committee of the Red Cross (ICRC) campaign to end the use of landmines. I spent a week alongside him, reporting on his work for the ICRC in northern Afghanistan. Khalil was a gentle, quiet man who left a big impression on everyone who met him.
Teaching Mathematical Modelling: Demonstrating Enrichment and Elaboration
ERIC Educational Resources Information Center
Warwick, Jon
2015-01-01
This paper uses a series of models to illustrate one of the fundamental processes of model building--that of enrichment and elaboration. The paper describes how a problem context is given which allows a series of models to be developed from a simple initial model using a queuing theory framework. The process encourages students to think about the…
A Method of Predicting Queuing at Library Online PCs
ERIC Educational Resources Information Center
Beranek, Lea G.
2006-01-01
On-campus networked personal computer (PC) usage at La Trobe University Library was surveyed during September 2005. The survey's objectives were to confirm peak usage times, to measure some of the relevant parameters of online PC usage, and to determine the effect that 24 new networked PCs had on service quality. The survey found that clients…
AIS ASM Operational Integration Plan
2013-08-01
...that supply AIS Routers as part of their AIS shoreside network software: Kongsberg C-Scope, Gatehouse AIS, Transas AIS Network, and CNS DataSwitch...commercial systems would be suitable for the current USCG traffic conditions. The ASM Manager is software that adds the required queuing and
Development and Analysis of Models for Handling the Refrigerated Containerized Cargoes
NASA Astrophysics Data System (ADS)
Nyrkov, A.; Pavlova, L.; Nikiforov, V.; Sokolov, S.; Budnik, V.
2017-07-01
This paper considers an open multi-channel queuing system that receives an irregular flow of homogeneous or heterogeneous requests with unlimited standby time. The system is regarded as an example of a container terminal, having conditionally functional sections with a certain duty cycle, which receives an irregular, non-uniform flow of vessels with the resultant intensity.
Affirmative Action Case Queued Up for Airing at High Court
ERIC Educational Resources Information Center
Walsh, Mark
2012-01-01
The future of affirmative action in education--not just for colleges but potentially for K-12 schools as well--may be on the line when the U.S. Supreme Court takes up a race-conscious admissions plan from the University of Texas next month. That seems apparent to the scores of education groups that have lined up behind the university with…
Cultural Geography Model Validation
2010-03-01
the Cultural Geography Model (CGM), a government owned, open source multi-agent system utilizing Bayesian networks, queuing systems, the Theory of...referent determined either from theory or SME opinion. 4. CGM Overview The CGM is a government-owned, open source, data driven multi-agent social...HSCB, validation, social network analysis ABSTRACT: In the current warfighting environment, the military needs robust modeling and simulation (M&S
Occupational Feminization and Pay: Assessing Causal Dynamics Using 1950-2000 U.S. Census Data
ERIC Educational Resources Information Center
Levanon, Asaf; England, Paula; Allison, Paul
2009-01-01
Occupations with a greater share of females pay less than those with a lower share, controlling for education and skill. This association is explained by two dominant views: devaluation and queuing. The former views the pay offered in an occupation to affect its female proportion, due to employers' preference for men--a gendered labor queue. The…
ERIC Educational Resources Information Center
Chiarini, Marc A.
2010-01-01
Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…
Modelling Pedestrian Travel Time and the Design of Facilities: A Queuing Approach
Rahman, Khalidur; Abdul Ghani, Noraida; Abdulbasah Kamil, Anton; Mustafa, Adli; Kabir Chowdhury, Md. Ahmed
2013-01-01
Pedestrian movements are the consequence of several complex and stochastic facts. The modelling of pedestrian movements and the ability to predict the travel time are useful for evaluating the performance of a pedestrian facility. However, only a few studies can be found that incorporate the design of the facility, local pedestrian body dimensions, the delay experienced by the pedestrians, and level of service to the pedestrian movements. In this paper, a queuing based analytical model is developed as a function of relevant determinants and functional factors to predict the travel time on pedestrian facilities. The model can be used to assess the overall serving rate or performance of a facility layout and correlate it to the level of service that is possible to provide the pedestrians. It also has the ability to provide clear suggestions on the design and sizing of pedestrian facilities. The model is empirically validated and is found to be a robust tool for understanding how well a particular walking facility enables comfortable and convenient pedestrian movement. A sensitivity analysis is also performed to see the impact of some crucial parameters of the developed model on the performance of pedestrian facilities. PMID:23691055
NASA Astrophysics Data System (ADS)
Wang, Haibo; Swee Poo, Gee
2004-08-01
We study the provisioning of virtual private network (VPN) service over WDM optical networks. For this purpose, we investigate the blocking performance of the hose model versus the pipe model for the provisioning. Two techniques are presented: an analytical queuing model and a discrete event simulation. The queuing model is developed from the multirate reduced-load approximation technique. The simulation is done with the OPNET simulator. Several experimental situations were used. The blocking probabilities calculated from the two approaches show a close match, indicating that the multirate reduced-load approximation technique is capable of predicting the blocking performance for the pipe model and the hose model in WDM networks. A comparison of the blocking behavior of the two models shows that the hose model has superior blocking performance compared with the pipe model. By and large, the blocking probability of the hose model is better than that of the pipe model by a few orders of magnitude, particularly at low load regions. The flexibility of the hose model, which allows resources on a link to be shared among all connections, accounts for its superior performance.
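The multirate reduced-load approximation itself is too involved for a short sketch, but its basic building block, the Erlang B loss formula for a single link, already illustrates why hose-style resource sharing blocks less than pipe-style partitioning: a pooled capacity outperforms the same capacity split into dedicated halves. The loads and circuit counts below are illustrative:

```python
def erlang_b(a, c):
    """Blocking probability of an Erlang loss system with offered load
    a (in Erlangs) and c circuits, via the stable recursion
    B(a, 0) = 1;  B(a, k) = a*B(a, k-1) / (k + a*B(a, k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# Pooled capacity (hose-style sharing) vs. two dedicated halves (pipe-style):
pooled = erlang_b(8.0, 12)   # one shared pool of 12 circuits, total load 8
split = erlang_b(4.0, 6)     # each half carries load 4 on 6 circuits
print(pooled, split)         # pooling yields the lower blocking probability
```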
2002-09-01
Protocol LAN Local Area Network LDAP Lightweight Directory Access Protocol LLQ Low Latency Queuing MAC Media Access Control MarCorSysCom Marine...Description Protocol SIP Session Initiation Protocol SMTP Simple Mail Transfer Protocol SPAWAR Space and Naval Warfare Systems Center SS7 ...PSTN infrastructure previously required to carry the conversation. The cost of accessing the PSTN is thereby eliminated. In cases where Internet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, D.; Black, D.; Slimmer, D.
1994-04-01
The DART Data Flow Manager (dfm) integrates a buffer manager with a requester/provider model for scheduling work on buffers. Buffer lists, representing built events or other data, are queued by service requesters to service providers. Buffers may be either internal (reside on the local node), or external (located elsewhere, e.g., dual ported memory). Internal buffers are managed locally. Wherever possible, dfm moves only addresses of buffers rather than buffers themselves.
NASA Astrophysics Data System (ADS)
Buick, Otto; Falcon, Pat; Alexander, G.; Siegel, Edward Carl-Ludwig
2013-03-01
Einstein[Dover(03)] critical-slowing-down(CSD)[Pais, Subtle in The Lord; Life & Sci. of Albert Einstein(81)] is Siegel CyberWar denial-of-access(DOA) operations-research queuing theory/pinning/jamming/.../Read [Aikido, Aikibojitsu & Natural-Law(90)]/Aikido(!!!) phase-transition critical-phenomenon via Siegel DIGIT-Physics (Newcomb[Am.J.Math. 4,39(1881)]-{Planck[(1901)]-Einstein[(1905)])-Poincare[Calcul Probabilités(12)-p.313]-Weyl [Goett.Nachr.(14); Math.Ann.77,313 (16)]-{Bose[(24)-Einstein[(25)]-Fermi[(27)]-Dirac[(1927)]}-``Benford''[Proc.Am.Phil.Soc. 78,4,551 (38)]-Kac[Maths.Stat.-Reasoning(55)]-Raimi[Sci.Am. 221,109 (69)...]-Jech[preprint, PSU(95)]-Hill[Proc.AMS 123,3,887(95)]-Browne[NYT(8/98)]-Antonoff-Smith-Siegel[AMS Joint-Mtg.,S.-D.(02)] algebraic-inversion to yield ONLY BOSE-EINSTEIN QUANTUM-statistics (BEQS) with ZERO-digit Bose-Einstein CONDENSATION(BEC) ``INTERSECTION''-BECOME-UNION to Barabasi[PRL 876,5632(01); Rev.Mod.Phys.74,47(02)...] Network /Net/GRAPH(!!!)-physics BEC: Strutt/Rayleigh(1881)-Polya(21)-``Anderson''(58)-Siegel[J.Non-crystalline-Sol.40,453(80)
Spectrally Queued Feature Selection for Robotic Visual Odometry
2010-11-23
in these systems has yet to be defined. 1. INTRODUCTION 1.1 Uses of Autonomous Vehicles Autonomous vehicles have a wide range of possible...applications. In military situations, autonomous vehicles are valued for their ability to keep Soldiers far away from danger. A robot can inspect and disarm...just a glimpse of what engineers are hoping for in the future. 1.2 Biological Influence Autonomous vehicles are becoming more of a possibility in
A Network Flow Approach to the Initial Skills Training Scheduling Problem
2007-12-01
include (but are not limited to) queuing theory, stochastic analysis and simulation. After the demand schedule has been estimated, it can be ...software package has already been purchased and is in use by AFPC, AFPC has requested that the new algorithm be programmed in this language as well ...the discussed outputs from those schedules. Required Inputs A single input file details the students to be scheduled as well as the courses
Multifractal Internet Traffic Model and Active Queue Management
2003-01-01
dropped by the Adaptive RED , ssthresh decreases from 64KB to 4KB and the new con- gestion window cwnd is decreased from 8KB to 1KB (Tahoe). The situation...method to predict the queuing behavior of FIFO and RED queues. In order to satisfy a given delay and jitter requirement for real time connections, and to...5.2 Vulnerability of Adaptive RED to Web-mice . . . . . . . . . . . . . 103 5.3 A Parallel Virtual Queues Structure
NQS - NETWORK QUEUING SYSTEM, VERSION 2.0 (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Walter, H.
1994-01-01
The Network Queuing System, NQS, is a versatile batch and device queuing facility for a single Unix computer or a group of networked computers. With the Unix operating system as a common interface, the user can invoke the NQS collection of user-space programs to move batch and device jobs freely around the different computer hardware tied into the network. NQS provides facilities for remote queuing, request routing, remote status, queue status controls, batch request resource quota limits, and remote output return. This program was developed as part of an effort aimed at tying together diverse UNIX based machines into NASA's Numerical Aerodynamic Simulator Processing System Network. This revision of NQS allows for creating, deleting, adding and setting of complexes that aid in limiting the number of requests to be handled at one time. It also has improved device-oriented queues along with some revision of the displays. NQS was designed to meet the following goals: 1) Provide for the full support of both batch and device requests. 2) Support all of the resource quotas enforceable by the underlying UNIX kernel implementation that are relevant to any particular batch request and its corresponding batch queue. 3) Support remote queuing and routing of batch and device requests throughout the NQS network. 4) Support queue access restrictions through user and group access lists for all queues. 5) Enable networked output return of both output and error files to possibly remote machines. 6) Allow mapping of accounts across machine boundaries. 7) Provide friendly configuration and modification mechanisms for each installation. 8) Support status operations across the network, without requiring a user to log in on remote target machines. 9) Provide for file staging or copying of files for movement to the actual execution machine. To support batch and device requests, NQS v.2 implements three queue types--batch, device and pipe. 
Batch queues hold and prioritize batch requests; device queues hold and prioritize device requests; pipe queues transport both batch and device requests to other batch, device, or pipe queues at local or remote machines. Unique to batch queues are resource quota limits that restrict the amounts of different resources that a batch request can consume during execution. Unique to each device queue is a set of one or more devices, such as a line printer, to which requests can be sent for execution. Pipe queues have associated destinations to which they route and deliver requests. If the proper destination machine is down or unreachable, pipe queues are able to requeue the request and deliver it later when the destination is available. All NQS network conversations are performed using the Berkeley socket mechanism as ported into the respective vendor kernels. NQS is written in C language. The generic UNIX version (ARC-13179) has been successfully implemented on a variety of UNIX platforms, including Sun3 and Sun4 series computers, SGI IRIS computers running IRIX 3.3, DEC computers running ULTRIX 4.1, AMDAHL computers running UTS 1.3 and 2.1, platforms running BSD 4.3 UNIX. The IBM RS/6000 AIX version (COS-10042) is a vendor port. NQS 2.0 will also communicate with the Cray Research, Inc. and Convex, Inc. versions of NQS. The standard distribution medium for either machine version of NQS 2.0 is a 60Mb, QIC-24, .25 inch streaming magnetic tape cartridge in UNIX tar format. Upon request the generic UNIX version (ARC-13179) can be provided in UNIX tar format on alternate media. Please contact COSMIC to discuss the availability and cost of media to meet your specific needs. An electronic copy of the NQS 2.0 documentation is included on the program media. NQS 2.0 was released in 1991. The IBM RS/6000 port of NQS was developed in 1992. IRIX is a trademark of Silicon Graphics Inc. IRIS is a registered trademark of Silicon Graphics Inc. 
UNIX is a registered trademark of UNIX System Laboratories Inc. Sun3 and Sun4 are trademarks of Sun Microsystems Inc. DEC and ULTRIX are trademarks of Digital Equipment Corporation.
Artificial intelligence based decision support for trumpeter swan management
Sojda, Richard S.
2002-01-01
The number of trumpeter swans (Cygnus buccinator) breeding in the Tri-State area where Montana, Idaho, and Wyoming come together has declined to just a few hundred pairs. However, these birds are part of the Rocky Mountain Population which additionally has over 3,500 birds breeding in Alberta, British Columbia, Northwest Territories, and Yukon Territory. To a large degree, these birds seem to have abandoned traditional migratory pathways in the flyway. Waterfowl managers have been interested in decision support tools that would help them explore simulated management scenarios in their quest towards reaching population recovery and the reestablishment of traditional migratory pathways. I have developed a decision support system to assist biologists with such management, especially related to wetland ecology. Decision support systems use a combination of models, analytical techniques, and information retrieval to help develop and evaluate appropriate alternatives. Swan management is a domain that is ecologically complex, and this complexity is compounded by spatial and temporal issues. As such, swan management is an inherently distributed problem. Therefore, the ecological context for modeling swan movements in response to management actions was built as a multiagent system of interacting intelligent agents that implements a queuing model representing swan migration. These agents accessed ecological knowledge about swans, their habitats, and flyway management principles from three independent expert systems. The agents were autonomous, had some sensory capability, and could respond to changing conditions. A key problem when developing ecological decision support systems is empirically determining that the recommendations provided are valid. Because Rocky Mountain trumpeter swans have been surveyed for a long period of time, I was able to compare simulated distributions provided by the system with actual field observations across 20 areas for the period 1988-2000. 
Applying the Matched Pairs Multivariate Permutation Test as a statistical tool was a new approach for comparing flyway distributions of waterfowl over time that seemed to work well. Based on this approach, the empirical evidence that I gathered led me to conclude that the base queuing model does accurately simulate swan distributions in the flyway. The system was insensitive to almost all model parameters tested. That remains perplexing, but might result from the base queuing model, itself, being particularly effective at representing the actual ecological diversity in the world of Rocky Mountain trumpeter swans, both spatially and temporally.
Research Activities of the Northwest Laboratory for Integrated Systems
1987-04-06
table, and composite table (to assist evaluation of objects) are each built. The parse tree is also checked to make sure there are no meaningless...Stan- ford) as well as the Apollo DN series. All of these implementations require eight bit planes for effective use of color. Also supported are AED...time of intersection had not yet passed the queuing of the segment was delayed until that time. This algorithm had the effect of preserving the slope of
2006-09-27
Information Sciences Department, JHU/Applied Physics Laboratory, 12000 Johns Hopkins Road., Laurel, Maryland. 22104 ( PHB ) to meet the QoS requirements of...applications, e.g., (Keshav, 1997). However, to date, no work ex- ists to design and investigate PHB algorithms which simultaneously deliver QoS to...techniques to handle P&P requirements and rely upon standard, well studied QoS PHB , e.g., Weighted Round Robin, Class-Based Fair Queuing, etc., for han
NASA Technical Reports Server (NTRS)
Watson, James F., III; Desrochers, Alan A.
1991-01-01
Generalized stochastic Petri nets (GSPNs) are applied to flexible manufacturing systems (FMSs). Throughput subnets and s-transitions are presented. Two FMS examples containing nonexponential distributions which were analyzed in previous papers by queuing theory and probability theory, respectively, are treated using GSPNs developed using throughput subnets and s-transitions. The GSPN results agree with the previous results, and developing and analyzing the GSPN models are straightforward and relatively easy compared to other methodologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Turitsyn, Konstantin; Sulc, Petr
The anticipated increase in the number of plug-in electric vehicles (EV) will put additional strain on electrical distribution circuits. Many control schemes have been proposed to control EV charging. Here, we develop control algorithms based on randomized EV charging start times and simple one-way broadcast communication allowing for a time delay between communication events. Using arguments from queuing theory and statistical analysis, we seek to maximize the utilization of excess distribution circuit capacity while keeping the probability of a circuit overload negligible.
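The overload-probability argument above can be sketched with a simple binomial tail: if randomized start times make each EV's charging state roughly independent at any instant, the chance that the number of simultaneous chargers exceeds the circuit headroom is a binomial upper tail. All numbers below are hypothetical; this is an illustration of the reasoning, not the paper's control algorithm.

```python
from math import comb

def overload_probability(n_vehicles, p_charging, capacity):
    """P(number of simultaneously charging EVs exceeds `capacity`),
    treating each of n_vehicles as charging independently with
    probability p_charging at a random instant (a simplification)."""
    return sum(
        comb(n_vehicles, k) * p_charging**k * (1 - p_charging) ** (n_vehicles - k)
        for k in range(capacity + 1, n_vehicles + 1)
    )

# Hypothetical feeder: 50 EVs, each drawing power 30% of the time,
# spare circuit capacity for 25 concurrent chargers.
p_overload = overload_probability(50, 0.3, 25)
```

With these placeholder numbers the overload probability is well below one percent, which is the sense in which randomization keeps the overload risk negligible while using most of the excess capacity.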
Optimising resource management in neurorehabilitation.
Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko
2014-01-01
To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite it being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21-bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
Queuing Models of Tertiary Storage
NASA Technical Reports Server (NTRS)
Johnson, Theodore
1996-01-01
Large scale scientific projects generate and use large amounts of data. For example, the NASA Earth Observation System Data and Information System (EOSDIS) project is expected to archive one petabyte per year of raw satellite data. This data is made automatically available for processing into higher level data products and for dissemination to the scientific community. Such large volumes of data can only be stored in robotic storage libraries (RSL's) for near-line access. A characteristic of RSL's is the use of a robot arm that transfers media between a storage rack and the read/write drives, thus multiplying the capacity of the system. The performance of the RSL's can be a critical limiting factor for the performance of the archive system. However, the many interacting components of an RSL make a performance analysis difficult. In addition, different RSL components can have widely varying performance characteristics. This paper describes our work to develop performance models of an RSL in isolation. Next we show how the RSL model can be incorporated into a queuing network model. We use the models to make some example performance studies of archive systems. The models described in this paper, developed for the NASA EOSDIS project, are implemented in C with a well defined interface. The source code, accompanying documentation, and also sample Java applets are available at: http://www.cis.ufl.edu/ted/
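The tandem structure described above (a robot arm feeding read/write drives) can be approximated, purely as an illustration, by a Jackson-style tandem of M/M/1 stations, where the end-to-end retrieval time is the sum of the per-station response times. The rates below are hypothetical placeholders, not EOSDIS measurements or the paper's calibrated model.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean time in system (queue wait + service) at an M/M/1 station."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable station: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical rates (per hour): 20 retrieval requests arrive; the
# robot arm completes a mount at 60/h; a drive finishes a transfer
# at 30/h. In a Jackson tandem each station sees the same Poisson
# arrival stream, so end-to-end time is the sum of station times.
lam = 20.0
t_robot = mm1_response_time(lam, 60.0)   # time at the robot arm
t_drive = mm1_response_time(lam, 30.0)   # time at the drive
t_total = t_robot + t_drive              # end-to-end retrieval time
```

Even this toy version shows the paper's point that the slowest component (here the drive) dominates archive response time.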
Modeling the effect of bus stops on capacity of curb lane
NASA Astrophysics Data System (ADS)
Luo, Qingyu; Zheng, Tianyao; Wu, Wenjing; Jia, Hongfei; Li, Jin
With the increase in buses and bus lines, the prolonged delay and queuing time at bus stops have a negative effect on road section capacity. However, existing methods of measuring this negative effect pay little attention to different bus stop types in the curb lanes. This paper uses Gap theory and Queuing theory to build models for effect-time and potential capacity in different conditions, including curbside bus stops, bus bays with overflow and bus bays without overflow. In order to make the effect-time models accurate and reliable, two types of probabilities are introduced. One is the probability that the dwell time is less than the headway of the curb lane at curbside bus stops; the other is the overflow probability at bus bays. Based on the fundamental road capacity model and effect-time models, potential capacity models of the curb lane are designed. The new models are calibrated with survey data from Changchun City, and verified with the VISSIM simulation software. Furthermore, with different arrival rates of vehicles, the siting conditions of bus stops are investigated. Results show that the potential capacity models have high precision. They can offer a reference for recognizing the effect of bus stops on the capacity of the curb lane, which can provide a basis for planning, design and management of urban roads and bus stops.
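The first probability mentioned above can be sketched under the usual Poisson-traffic assumption, in which curb-lane headways are exponential with the lane flow rate. The flow and dwell values below are illustrative, not the Changchun calibration.

```python
from math import exp

def p_dwell_fits_headway(lane_flow_veh_per_h, dwell_s):
    """P(bus dwell time is shorter than the current curb-lane headway),
    assuming Poisson curb-lane traffic so headways are exponential.
    An illustrative sketch, not the paper's calibrated model."""
    rate_per_s = lane_flow_veh_per_h / 3600.0
    return exp(-rate_per_s * dwell_s)

# 600 veh/h curb-lane flow, 20 s dwell time (hypothetical values)
p = p_dwell_fits_headway(600, 20)
```

The exponential tail makes the sensitivity explicit: doubling either the lane flow or the dwell time squares this probability's complement in the exponent, which is why curbside stops degrade capacity quickly on busy lanes.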
System model the processing of heterogeneous sensory information in robotized complex
NASA Astrophysics Data System (ADS)
Nikolaev, V.; Titov, V.; Syryamkin, V.
2018-05-01
The scope of application and the types of robotic systems containing subsystems of the form "heterogeneous sensor data processing subsystem" are analyzed. On the basis of queuing theory, a model is developed that takes into account the uneven intensity of the information flow from the sensors to the information-processing subsystem. An analytical solution is obtained to assess the relationship between subsystem performance and the unevenness of the flows. The solution is investigated over the range of parameter values of practical interest.
Temporal Traffic Dynamics Improve the Connectivity of Ad Hoc Cognitive Radio Networks
2014-02-12
more packets to send, and are (re)born when they do. We could also consider this from a duty-cycling perspective: Nodes sleep and wake up...transmitting and receiving activities in the primary network in an intricate way, we obtain the MMD by considering a flooding scheme that tries every...consider the delay caused by scheduling, contention, or queuing. It can be shown that this flooding scheme gives us the MMD. We stress that flooding is used
The study on the Layout of the Charging Station in Chengdu
NASA Astrophysics Data System (ADS)
Cai, yun; Zhang, wanquan; You, wei; Mao, pan
2018-03-01
In this paper, the factors affecting the layout of electric-vehicle charging stations are comprehensively analyzed, taking the layout principles of charging stations into account. A mathematical model based on queuing theory from operations research is established, and the number of sites is optimized following the principles of saving resources and owner convenience. Central-place theory is combined with this model to determine the service radius, the gravity method is used to determine the initial locations, and finally the center-of-gravity method is used to fix each charging station's location.
A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF
NASA Astrophysics Data System (ADS)
Deatrich, D. C.; Liu, S. X.; Tafirout, R.
2010-04-01
We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities that will be performed on the Worldwide LHC Computing Grid infrastructure continuously. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold, or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests by using an elevator algorithm, avoiding unnecessary tape loading and unloading. Implementation of priorities will guarantee file delivery to all clients in a timely manner.
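The read-back reordering idea can be sketched as a one-pass elevator sweep: group requests by tape so each tape is mounted at most once, and read the files on a tape in one ascending pass from the current head position. This is a simplified illustration of the elevator concept, not Tapeguy's actual Perl implementation; the tuple layout and parameter names are assumptions.

```python
def elevator_order(requests, mounted_tape=None, head_pos=0):
    """Reorder (tape_id, file_pos) read requests so that each tape is
    mounted once and its files are read in a single ascending sweep,
    wrapping around for positions behind the head."""
    by_tape = {}
    for tape, pos in requests:
        by_tape.setdefault(tape, []).append(pos)
    order = []
    # Serve the currently mounted tape first to avoid a reload.
    for tape in sorted(by_tape, key=lambda t: (t != mounted_tape, t)):
        start = head_pos if tape == mounted_tape else 0
        ahead = sorted(p for p in by_tape[tape] if p >= start)
        behind = sorted(p for p in by_tape[tape] if p < start)
        order += [(tape, p) for p in ahead + behind]
    return order
```

Scheduling by tape and position rather than by arrival order is what removes the unnecessary load/unload cycles the abstract refers to.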
Perturbation analysis of queueing systems with a time-varying arrival rate
NASA Technical Reports Server (NTRS)
Cassandras, Christos G.; Pan, Jie
1991-01-01
The authors consider an M/G/1 queuing system with a time-varying arrival rate. The objective is to obtain infinitesimal perturbation analysis (IPA) gradient estimates for various performance measures of interest with respect to certain system parameters. In particular, the authors consider the mean system time over n arrivals and an arrival rate alternating between two values. By choosing a convenient sample path representation of this system, they derive an unbiased IPA gradient estimator which, however, is not consistent, and investigate the nature of this problem.
Averaging principle for second-order approximation of heterogeneous models with homogeneous models.
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-11-27
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
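The O(ε²) claim can be checked numerically on a toy queuing outcome. Below, a 50/50 population of M/M/1 servers with rates μ ± ε is compared against the homogeneous model with rate μ; halving ε should shrink the gap by roughly a factor of four. The choice of mean time in system as the smooth, symmetric outcome is an illustrative one, not the paper's worked example.

```python
def mm1_time_in_system(mu, lam=1.0):
    """Mean M/M/1 time in system: a smooth, symmetric test outcome."""
    return 1.0 / (mu - lam)

def heterogeneity_gap(mu, eps):
    """Gap between a 50/50 mix of service rates mu +/- eps and the
    homogeneous model that uses the average rate mu."""
    mixed = 0.5 * (mm1_time_in_system(mu + eps) + mm1_time_in_system(mu - eps))
    return mixed - mm1_time_in_system(mu)

# If the gap is O(eps^2), halving eps should cut it by about 4x.
ratio = heterogeneity_gap(2.0, 0.2) / heterogeneity_gap(2.0, 0.1)
```

The observed ratio is close to 4 (slightly above, because of higher-order terms), which is exactly the second-order behaviour the averaging principle predicts.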
Averaging principle for second-order approximation of heterogeneous models with homogeneous models
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-01-01
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ɛ2) equivalent to the outcome of the corresponding homogeneous model, where ɛ is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569
Flexible server-side processing of climate archives
NASA Astrophysics Data System (ADS)
Juckes, Martin; Stephens, Ag; Damasio da Costa, Eduardo
2014-05-01
The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
Ruiz-Patiño, Alejandro; Acosta-Ospina, Laura Elena; Rueda, Juan-David
2017-04-01
Congestion in the postanesthesia care unit (PACU) leads to the formation of waiting queues for patients being transferred after surgery, negatively affecting hospital resources. As patients recover in the operating room, incoming surgeries are delayed. The purpose of this study was to establish the impact of this phenomenon in multiple settings. An operational mathematical study based on queuing theory was performed. Average queue length, average queue waiting time, and daily queue waiting time were evaluated. Calculations were based on the mean patient daily flow, PACU length of stay, occupation, and current number of beds. Data were prospectively collected during a period of 2 months, and the entry and exit time was recorded for each patient taken to the PACU. Data were entered into a computational model built in MS Excel. To account for data uncertainty, deterministic and probabilistic sensitivity analyses for all dependent variables were performed. With a mean patient daily flow of 40.3 and an average PACU length of stay of 4 hours, average total lost surgical opportunity time was estimated at 2.36 hours (95% CI: 0.36-4.74 hours). Cost of opportunity was calculated at $1592 per lost hour. Sensitivity analysis showed that an increase of two beds is required to solve the queue formation. When congestion has a negative impact on cost of opportunity in the surgical setting, queuing analysis provides definitive actions to solve the problem, improving quality of service and resource utilization. Copyright © 2016 Elsevier Inc. All rights reserved.
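The study's core quantities can be reproduced in outline with the standard Erlang C formula for an M/M/c queue. The arrival rate and mean length of stay below come from the abstract; the bed count is a hypothetical placeholder, since the unit's actual capacity is not stated here.

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability an arrival must wait in an M/M/c queue,
    with c servers and offered load a = lambda/mu (requires a < c)."""
    tail = a**c / factorial(c) * c / (c - a)
    return tail / (sum(a**k / factorial(k) for k in range(c)) + tail)

# From the abstract: 40.3 patients/day, 4 h mean PACU stay.
# The bed count c = 12 is a hypothetical placeholder.
lam = 40.3 / 24.0              # arrivals per hour
mu = 1.0 / 4.0                 # bed turnovers per hour
a = lam / mu                   # offered load: ~6.7 busy beds on average
c = 12
p_wait = erlang_c(c, a)                # chance a patient must queue
wq_hours = p_wait / (c * mu - lam)     # mean queuing delay in hours
```

Re-running the last three lines with different values of `c` is the essence of the paper's sensitivity analysis on bed numbers: the queuing delay grows sharply as `c` approaches the offered load `a`.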
Flexible server-side processing of climate archives
NASA Astrophysics Data System (ADS)
Juckes, M. N.; Stephens, A.; da Costa, E. D.
2013-12-01
The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
Yip, Kenneth; Pang, Suk-King; Chan, Kui-Tim; Chan, Chi-Kuen; Lee, Tsz-Leung
2016-08-08
Purpose - The purpose of this paper is to present a simulation modeling application to reconfigure the outpatient phlebotomy service of an acute regional and teaching hospital in Hong Kong, with an aim to improve service efficiency, shorten patient queuing time and enhance workforce utilization. Design/methodology/approach - The system was modeled as an inhomogeneous Poisson process and a discrete-event simulation model was developed to simulate the current setting, and to evaluate how various performance metrics would change if switched from a decentralized to a centralized model. Variations were then made to the model to test different workforce arrangements for the centralized service, so that managers could decide on the service's final configuration via an evidence-based and data-driven approach. Findings - This paper provides empirical insights about the relationship between staffing arrangement and system performance via a detailed scenario analysis. One particular staffing scenario was chosen by managers as it was considered to strike the best balance between performance and the workforce scheduled. The resulting centralized phlebotomy service was successfully commissioned. Practical implications - This paper demonstrates how analytics could be used for operational planning at the hospital level. The authors show that a transparent and evidence-based scenario analysis, made available through analytics and simulation, greatly helps management and clinical stakeholders arrive at the ideal service configuration. Originality/value - The authors provide a robust method in evaluating the relationship between workforce investment, queuing reduction and workforce utilization, which is crucial for managers when deciding the delivery model for any outpatient-related service.
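Arrivals from an inhomogeneous Poisson process, as used in the study above, are commonly sampled by Lewis-Shedler thinning: draw candidate arrivals from a homogeneous process at the peak rate and keep each with probability proportional to the instantaneous rate. The two-level arrival profile below is a hypothetical illustration, not the hospital's measured pattern.

```python
import random

def sample_inhomogeneous_poisson(rate_fn, rate_max, horizon, rng):
    """Arrival times on [0, horizon) via Lewis-Shedler thinning:
    candidates come from a homogeneous process at rate_max, and each
    is kept with probability rate_fn(t) / rate_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t >= horizon:
            return arrivals
        if rng.random() < rate_fn(t) / rate_max:
            arrivals.append(t)

# Hypothetical clinic profile over an 8 h day: a 2 h morning peak
# at 30 patients/h, then a quieter 10 patients/h tail.
rate = lambda t: 30.0 if t < 2.0 else 10.0
times = sample_inhomogeneous_poisson(rate, 30.0, 8.0, random.Random(7))
```

Feeding such an arrival stream into a discrete-event simulation of the phlebotomy stations is what lets the scenario analysis capture the morning rush rather than assuming a flat average arrival rate.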
Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier
2014-07-24
Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori, because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. Contrarily, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. Such a schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA.
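The state-machine energy accounting described above reduces to a weighted sum: expected energy per delivered packet is the per-state energy cost times the average number of visits to that state. The state names and numbers below are illustrative placeholders, not the paper's measured values; they merely show how collision-driven extra visits inflate FSA's total.

```python
def mean_energy_per_packet(states):
    """Expected energy to deliver one packet: sum over MAC states of
    (average energy per visit) x (average visits per packet)."""
    return sum(energy * visits for _, energy, visits in states)

# Illustrative placeholders (energy units, visit counts) -- NOT the
# paper's measurements. FSA's visit counts exceed 1 because of
# collisions and retransmissions; DQ's schedule is collision-free.
fsa = [("contend", 0.8, 2.4), ("tx_data", 1.5, 1.3), ("rx_ack", 0.4, 1.3)]
dq = [("contend", 0.3, 1.1), ("tx_data", 1.5, 1.0), ("rx_ack", 0.4, 1.0)]
e_fsa = mean_energy_per_packet(fsa)
e_dq = mean_energy_per_packet(dq)
```

Under these assumed numbers the DQ total comes out lower, mirroring the qualitative conclusion that collision-free scheduling pays off when the node count is unknown.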
Modeling work of the dispatching service of high-rise building as queuing system
NASA Astrophysics Data System (ADS)
Dement'eva, Marina; Dement'eva, Anastasiya
2018-03-01
The article presents the results of calculating the performance indicators of the dispatching service of a high-rise building as a queuing system with an unlimited queue. The calculation was carried out for three models: a single control room with a general service brigade, a single control room with specialized services, and several dispatch centers with specialized services. The aim of the work was to investigate how the organizational structure of the dispatching service of a high-rise building affects operating costs and the time needed to process and fulfil applications. The problems of high-rise construction and their impact on the complexity of building operation are analyzed, as is the composition of operation and maintenance activities for high-rise buildings. The relevance of the study is justified by the need to review the role of dispatching services in the structure of building quality management. The dispatching service is evolving from the lowest level of management of individual engineering systems into the main link in the centralized automated management of high-rise building operation. With the transition to market relations, profitability becomes one of the main criteria of the effectiveness of the dispatching service. A mathematical model for assessing the efficiency of the dispatching service on a set of quality-of-service indicators is proposed. The structure of operating costs is presented. A decision-making algorithm is given for choosing the optimal structural scheme of the dispatching service of a high-rise building.
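The core trade-off between one pooled control room and several separate dispatch centers can be sketched with closed-form M/M/1 and M/M/2 waiting times: splitting a request stream between two dedicated dispatchers is compared against pooling both dispatchers behind one queue. The rates are illustrative, not the article's data, and this is only one slice of its three-model comparison.

```python
def wq_mm1(lam, mu):
    """Mean queue wait in an M/M/1 system (requires lam < mu)."""
    return (lam / mu) / (mu - lam)

def wq_mm2(lam, mu):
    """Mean queue wait in an M/M/2 system (Erlang C with c = 2)."""
    rho = lam / (2.0 * mu)
    p_wait = 2.0 * rho**2 / (1.0 + rho)   # probability an arrival waits
    return p_wait / (2.0 * mu - lam)

# Illustrative rates: 1.5 applications/h in total; each dispatcher
# clears applications at 1/h.
lam, mu = 1.5, 1.0
wq_split = wq_mm1(lam / 2.0, mu)   # two separate dispatch centers
wq_pooled = wq_mm2(lam, mu)        # one control room, two dispatchers
```

With these numbers the pooled configuration cuts the mean wait by more than half, the classic pooling effect that makes a single control room attractive until specialization benefits outweigh it.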
Effects of diversity and procrastination in priority queuing theory: The different power law regimes
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2010-01-01
Empirical analyses show that after the update of a browser, or the publication of the vulnerability of a software, or the discovery of a cyber worm, the fraction of computers still using the older browser or software version, or not yet patched, or exhibiting worm activity decays as a power law ~1/t^α with 0 < α ≤ 1 over a time scale of years. We present a simple model for this persistence phenomenon, framed within standard priority queuing theory, of a target task which has the lowest priority compared to all other tasks that flow on the computer of an individual. We identify a “time deficit” control parameter β and a bifurcation to a regime where there is a nonzero probability for the target task to never be completed. The distribution of waiting time T until the completion of the target task has the power law tail ~1/t^{1/2}, resulting from a first-passage solution of an equivalent Wiener process. Taking into account a diversity of time deficit parameters in a population of individuals, the power law tail is changed into 1/t^α, with α ∈ (0.5, ∞), including the well-known case 1/t. We also study the effect of “procrastination,” defined as the situation in which the target task may be postponed or delayed even after the individual has solved all other pending tasks. This regime provides an explanation for even slower apparent decay and longer persistence.
Optimizing Endoscope Reprocessing Resources Via Process Flow Queuing Analysis.
Seelen, Mark T; Friend, Tynan H; Levine, Wilton C
2018-05-04
The Massachusetts General Hospital (MGH) is merging its older endoscope processing facilities into a single new facility that will enable high-level disinfection of endoscopes for both the ORs and Endoscopy Suite, leveraging economies of scale for improved patient care and optimal use of resources. Finalized resource planning was necessary for the merging of facilities to optimize staffing and make final equipment selections to support the nearly 33,000 annual endoscopy cases. To accomplish this, we employed operations management methodologies, analyzing the physical process flow of scopes throughout the existing Endoscopy Suite and ORs and mapping the future state capacity of the new reprocessing facility. Further, our analysis required the incorporation of historical case and reprocessing volumes in a multi-server queuing model to identify any potential wait times as a result of the new reprocessing cycle. We also performed sensitivity analysis to understand the impact of future case volume growth. We found that our future-state reprocessing facility, given planned capital expenditures for automated endoscope reprocessors (AERs) and pre-processing sinks, could easily accommodate current scope volume well within the necessary pre-cleaning-to-sink reprocessing time limit recommended by manufacturers. Further, in its current planned state, our model suggested that the future endoscope reprocessing suite at MGH could support an increase in volume of at least 90% over the next several years. Our work suggests that with simple mathematical analysis of historic case data, significant changes to a complex perioperative environment can be made with ease while keeping patient safety as the top priority.
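The growth sensitivity analysis described above reduces, in the simplest queuing view, to asking how far the arrival rate can rise before server utilization exceeds a target; a sketch with hypothetical numbers (the 0.85 target and the rates are illustrative assumptions, not MGH figures):

```python
def max_volume_growth(arrival_rate, service_rate, servers, target_util=0.85):
    """Largest factor by which the arrival rate can grow while utilization
    arrival_rate / (servers * service_rate) stays at or below target_util."""
    return target_util * servers * service_rate / arrival_rate

# Hypothetical: 33,000 scopes/year, 4 reprocessors at 18,000 scopes/year each.
growth = max_volume_growth(33_000, 18_000, 4)
print(f"supports {100 * (growth - 1):.0f}% volume growth")
```

A fuller analysis, as in the paper, would feed the grown arrival rate into a multi-server queuing model to check wait times against the manufacturers' pre-cleaning-to-sink limit rather than a bare utilization cap.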
Markov modeling and discrete event simulation in health care: a systematic comparison.
Standfield, Lachlan; Comans, Tracy; Scuffham, Paul
2014-04-01
The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
QoS support over ultrafast TDM optical networks
NASA Astrophysics Data System (ADS)
Narvaez, Paolo; Siu, Kai-Yeung; Finn, Steven G.
1999-08-01
HLAN is a promising architecture for realizing Tb/s access networks based on ultra-fast optical TDM technologies. This paper presents new research results on efficient algorithms for the support of quality of service over the HLAN network architecture. In particular, we propose a new scheduling algorithm that emulates fair queuing in a distributed manner for bandwidth allocation purposes. The proposed scheduler collects information on the queue of each host on the network and then instructs each host how much data to send. Our new scheduling algorithm ensures full bandwidth utilization while guaranteeing fairness among all hosts.
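One standard way a central scheduler can turn per-host queue information into fair sending quotas is max-min fairness (water-filling). This sketch illustrates the general idea, not the paper's exact algorithm:

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation of link capacity among host demands.
    Hosts demanding less than the current fair share keep their demand;
    the surplus is redistributed among the rest (water-filling)."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        # hosts whose demand fits under the current fair share are satisfied
        sat = [i for i in remaining if demands[i] <= share]
        if not sat:
            for i in remaining:
                alloc[i] = share
            break
        for i in sat:
            alloc[i] = demands[i]
            cap -= demands[i]
        remaining = [i for i in remaining if i not in sat]
    return alloc
```

For example, `max_min_fair(10, [2, 3, 10])` satisfies the two small demands in full and gives the leftover 5 units to the greedy host, so capacity is fully used while no host can gain without a smaller host losing.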
Jagger, Pamela; Shively, Gerald
2016-01-01
Using data from 433 firms operating along Uganda’s charcoal and timber supply chains we investigate patterns of bribe payment and tax collection between supply chain actors and government officials responsible for collecting taxes and fees. We examine the factors associated with the presence and magnitude of bribe and tax payments using a series of bivariate probit and Tobit regression models. We find empirical support for a number of hypotheses related to payments, highlighting the role of queuing, capital-at-risk, favouritism, networks, and role in the supply chain. We also find that taxes crowd-in bribery in the charcoal market. PMID:27274568
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
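The taxi-out process in such models is essentially a FIFO queue for the departure runway; a minimal deterministic sketch (the fixed service time stands in for runway occupancy and is a hypothetical value, not the paper's calibration):

```python
def taxi_out_delays(pushback_times, service_time):
    """Queuing delays for a FIFO runway departure queue with deterministic
    service: one takeoff every `service_time` minutes."""
    delays, free_at = [], 0.0
    for t in sorted(pushback_times):
        start = max(t, free_at)      # wait until the runway is free
        delays.append(start - t)
        free_at = start + service_time
    return delays
```

Even this toy version reproduces the key banked-hub behavior: when a departure bank pushes back faster than the runway can serve, delays grow linearly down the queue.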
Research on a simulation model of passenger travel behavior on urban rail platforms
NASA Astrophysics Data System (ADS)
Wang, Yujia; Yin, Xiangyong
2017-05-01
Based on field research on the platform of the Chegongzhuang station on Line 2 of the Beijing subway, passenger travel behavior on an urban rail platform is divided into four parts: entering passengers walking, passengers choosing waiting positions and queuing in front of the doors, passengers boarding and alighting, and alighting passengers walking. Following the social force model, a simulation model was built in Matlab. Validated against actual data from the Chegongzhuang station, the simulation results show that the social force model is effective.
Spontaneous symmetry breaking in a two-lane model for bidirectional overtaking traffic
NASA Astrophysics Data System (ADS)
Appert-Rolland, C.; Hilhorst, H. J.; Schehr, G.
2010-08-01
Firstly, we consider a unidirectional flux ω̄ of vehicles, each of which is characterized by its 'natural' velocity v drawn from a distribution P(v). The traffic flow is modeled as a collection of straight 'world lines' in the time-space plane, with overtaking events represented by a fixed queuing time τ imposed on the overtaking vehicle. This geometrical model exhibits platoon formation and allows, among many other things, for the calculation of the effective average velocity w ≡ φ(v) of a vehicle of natural velocity v. Secondly, we extend the model to two opposite lanes, A and B. We argue that the queuing time τ in one lane is determined by the traffic density in the opposite lane. On the basis of reasonable additional assumptions we establish a set of equations that couple the two lanes and can be solved numerically. It appears that above a critical value ω̄_c of the control parameter ω̄ the symmetry between the lanes is spontaneously broken: there is a slow lane where long platoons form behind the slowest vehicles, and a fast lane where overtaking is easy due to the wide spacing between the platoons in the opposite direction. A variant of the model is studied in which the spatial vehicle density ρ̄ rather than the flux ω̄ is the control parameter. Unequal fluxes ω̄_A and ω̄_B in the two lanes are also considered. The symmetry breaking phenomenon exhibited by this model, even though no doubt hard to observe in pure form in real-life traffic, nevertheless indicates a tendency of such traffic.
A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.
The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis is discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
de Oliveira, Daiana
2018-01-01
We assessed dairy cows’ body postures while they were performing different stationary activities in a loose housing system and then used the variation within and between individuals to identify potential connections between specific postures and the valence and arousal dimensions of emotion. We observed 72 individuals within a single milking herd focusing on their ear, neck and tail positions while they were: feeding from individual roughage bins, being brushed by a mechanical rotating brush and queuing to enter a single automatic milking system. Cows showed different ear, neck and tail postures depending on the situation. When combined, their body posture during feeding was ears back up and neck down, with tail wags directed towards the body, during queuing their ears were mainly axial and forward, their neck below the horizontal and the tail hanging stationary, and during brushing their ears were backwards and asymmetric, the neck horizontal and the tail wagging vigorously. We then placed these findings about cow body posture during routine activities into an arousal/valence framework used in animal emotion research (dimensional model of core affect). In this way we generate a priori predictions of how the positions of the ears, neck and tail of cows may change in other situations, previously demonstrated to vary in valence and arousal. We propose that this new methodology, with its different steps of integration, could contribute to the identification and validation of behavioural (postural) indicators of how positively or negatively cows experience other activities, or situations, and how calm or aroused they are. Although developed here on dairy cattle, by focusing on relevant postures, this approach could be easily adapted to other species. PMID:29718937
Caraher, M; Lloyd, S; Mansfield, M; Alp, C; Brewster, Z; Gresham, J
2016-08-01
The objective was to observe and document food behaviours of secondary school pupils from schools in a London borough. The research design combined a number of methods which included geographic information system (GIS) mapping of food outlets around three schools, systemised observations of food purchasing in those outlets before, during and after school, and focus groups conducted with pupils of those schools to gather their views in respect to those food choices. Results are summarised under the five 'A's of Access, Availability, Affordability and Acceptability & Attitudes: Access in that there were concentrations of food outlets around the schools. The majority of pupil food purchases were from newsagents, small local shops and supermarkets of chocolate, crisps (potato chips), fizzy drinks and energy drinks. Availability of fast food and unhealthy options were a feature of the streets surrounding the schools, with 200 m the optimal distance pupils were prepared to walk from and back to school at lunchtime. Affordability was ensured by the use of a consumer mentality and pupils sought out value for money offers; group purchasing of 'two for one' type offers encouraged this trend. Pupils reported healthy items on sale in school as expensive, and also that food was often sold in smaller portion sizes than that available from external food outlets. Acceptability and Attitudes, in that school food was not seen as 'cool', queuing for school food was not acceptable but queuing for food from takeaways was not viewed negatively; for younger pupils energy drinks were 'cool'. In conclusion, pupils recognised that school food was healthier but provided several reasons for not eating in school related to the five 'A's above.
Storage assignment optimization in a multi-tier shuttle warehousing system
NASA Astrophysics Data System (ADS)
Wang, Yanyan; Mou, Shandong; Wu, Yaohua
2016-03-01
The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models for conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS), because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement invalidate the foundation of the TSP formulation. In this study, a two-stage open queuing network model, in which shuttles and a lift are regarded as servers at different stages, is proposed to analyze system performance in terms of shuttle waiting period (SWP) and lift idle period (LIP) during the transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length for optimizing the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. An ant colony clustering algorithm is designed to determine storage partitions by clustering items. In addition, goods are assigned for storage according to the rearranged permutation and combination of storage partitions in a 2D plane, derived from the analysis results of the queuing network model and three basic principles. The storage assignment method and its entire optimization algorithm as applied in an MSWS are verified through a practical engineering project in the tobacco industry. The results show that the total SWP and LIP can be reduced effectively, improving the utilization rates of all devices and increasing the throughput of the distribution center.
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions: http://genome.tugraz.at/Software/ClusterControl
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
NASA Astrophysics Data System (ADS)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy
2018-03-01
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
Architectural design and simulation of a virtual memory
NASA Technical Reports Server (NTRS)
Kwok, G.; Chu, Y.
1971-01-01
Virtual memory is an imaginary main memory with a very large capacity which the programmer has at his disposal. It greatly contributes to the solution of the dynamic storage allocation problem. The architectural design of a virtual memory is presented which implements in hardware the queuing and scheduling of page requests to a paging drum in such a way that the effective access rate of the paging drum is increased many times. With this design, an increase of up to 16 times in page transfer rate is achievable when the virtual memory is heavily loaded. This in turn makes feasible a great increase in system throughput.
Estimating the Effects of the Terminal Area Productivity Program
NASA Technical Reports Server (NTRS)
Lee, David A.; Kostiuk, Peter F.; Hemm, Robert V., Jr.; Wingrove, Earl R., III; Shapiro, Gerald
1997-01-01
The report describes methods and results of an analysis of the technical and economic benefits of the systems to be developed in the NASA Terminal Area Productivity (TAP) program. A runway capacity model using parameters that reflect the potential impact of the TAP technologies is described. The runway capacity model feeds airport specific models which are also described. The capacity estimates are used with a queuing model to calculate aircraft delays, and TAP benefits are determined by calculating the savings due to reduced delays. The report includes benefit estimates for Boston Logan and Detroit Wayne County airports. An appendix includes a description and listing of the runway capacity model.
SNR-based queue observations at CFHT
NASA Astrophysics Data System (ADS)
Devost, Daniel; Moutou, Claire; Manset, Nadine; Mahoney, Billy; Burdullis, Todd; Cuillandre, Jean-Charles; Racine, René
2016-07-01
In an effort to make optimal use of night time under the exquisite weather on Maunakea, CFHT has equipped its dome with vents and is now moving its Queued Service Observing (QSO)-based operations toward Signal-to-Noise Ratio (SNR) observing. In this new mode, individual exposure times for a science program are estimated using a model that takes measurements of the weather conditions as input, and the science program is considered completed when the depth required by the scientific requirements is reached. These changes allow CFHT to make better use of the excellent seeing conditions provided by Maunakea, allowing us to complete programs in less time than allocated.
Data management system performance modeling
NASA Technical Reports Server (NTRS)
Kiser, Larry M.
1993-01-01
This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
Interest-Driven Model for Human Dynamics
NASA Astrophysics Data System (ADS)
Shang, Ming-Sheng; Chen, Guan-Xiong; Dai, Shuang-Xing; Wang, Bing-Hong; Zhou, Tao
2010-04-01
Empirical observations indicate that the interevent time distribution of human actions exhibits heavy-tailed features. The queuing model based on task priorities is to some extent successful in explaining the origin of such heavy tails; however, it cannot explain all the temporal statistics of human behavior, especially for daily entertainment activities. We propose an interest-driven model, which can reproduce the power-law distribution of interevent time. The exponent can be analytically obtained and is in good accordance with the simulations. This model well explains the observed relationship between activities and power-law exponents, as reported recently for web-based behavior and instant message communications.
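A heavily simplified illustration of the interest-driven idea (a sketch, not the authors' exact dynamics): the probability of acting falls after each action and recovers while waiting, which already produces bursts of activity separated by long gaps:

```python
import random

def interevent_times(steps, seed=0):
    """Minimal interest-driven process: at each time step the action occurs
    with probability p ('interest'); acting halves p, waiting raises it a
    little, so interevent times become broadly distributed."""
    rng = random.Random(seed)
    p, last, gaps = 0.5, 0, []
    for t in range(1, steps + 1):
        if rng.random() < p:
            gaps.append(t - last)
            last = t
            p = max(p * 0.5, 1e-4)   # acting lowers interest
        else:
            p = min(p * 1.1, 1.0)    # waiting restores interest
    return gaps
```

The multiplicative up/down rule for p is an assumption chosen for simplicity; the paper derives the actual exponent of the interevent time distribution analytically.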
Quantifying Cyber-Resilience Against Resource-Exhaustion Attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, Glenn A.; Griswold, Richard L.; Beech, Zachary W.
2014-07-11
Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically defined as the area under the stress vs. strain curve. We took inspiration from mechanics in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in language and engineering terms and then translate these definitions to information sciences. Then we tested our definitions of resilience for a very simple problem in networked queuing systems. We discuss lessons learned and make recommendations for using this approach in future work.
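The mechanical definition being borrowed, resilience as the area under the stress-strain curve, is straightforward to compute numerically; a sketch using the trapezoidal rule:

```python
def resilience(strain, stress):
    """Modulus of resilience as the area under the stress-strain curve,
    computed with the trapezoidal rule over sampled (strain, stress) points."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area
```

For the information-system analogue, one would substitute a measured load ("stress") versus degradation ("strain") curve for the queuing system under attack; how to choose those two axes is precisely the definitional question the report explores.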
A general model for memory interference in a multiprocessor system with memory hierarchy
NASA Technical Reports Server (NTRS)
Taha, Badie A.; Standley, Hilda M.
1989-01-01
The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.
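For the simplest non-hierarchical case there is a well-known closed-form estimate: when p processors each direct one request uniformly at random among m memory modules, the expected number of busy modules is m(1 - (1 - 1/m)^p). This textbook approximation is only a baseline; the paper's clustered hierarchy model is more detailed:

```python
def expected_busy_modules(m, p):
    """Expected number of busy modules when p processors each issue one
    request to one of m modules uniformly at random (textbook baseline,
    not the paper's hierarchical queuing network model)."""
    return m * (1.0 - (1.0 - 1.0 / m) ** p)
```

The gap between this value and p (the no-interference ideal) is a quick measure of how much performance memory contention costs.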
News from the CFHT/ESPaDOnS spectropolarimeter
NASA Astrophysics Data System (ADS)
Moutou, C.; Malo, L.; Manset, N.; Selliez-Vandernotte, L.; Desrochers, M.-E.
2015-12-01
The ESPaDOnS spectropolarimeter has been in use on the Canada-France-Hawaii Telescope (CFHT) since 2004, for studying stars, galactic objects and planets. ESPaDOnS has been used in queued service observing mode since 2008, which allows an optimization of the science outcome. In this article, we summarize the new functionalities and analyses made on ESPaDOnS operations and data for present and future users. These modifications include: signal-to-noise-ratio-based observing, radial velocity nightly drifts, the OPERA pipeline under development, the measurement of H2O content in the Maunakea sky, and the use of ESPaDOnS with the neighbouring telescope Gemini.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
The evaluation model of the design of toll
NASA Astrophysics Data System (ADS)
Feng, Shuting
2018-04-01
In recent years, the dramatic increase in traffic burden has highlighted the necessity of the rational design of toll plazas. At the same time, the many factors that must be considered raise the design requirements. Against this background, we carry out research on this subject. We propose reasonable assumptions and abstract the toll plaza into a model related only to B and L. Using queuing theory and traffic flow theory, we express throughput, cost and accident prevention in terms of B and L to obtain the base model. By applying the linear weighting method from economics to this model, the optimal B and L strategies are obtained.
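The linear weighting step can be sketched as scoring each candidate (B, L) design by a weighted sum of normalized objectives; the weights, normalization, and candidate values below are illustrative assumptions, not the paper's numbers:

```python
def score(throughput, cost, safety, weights=(0.5, 0.3, 0.2)):
    """Linear weighted score for a candidate (B, L) design. Inputs are
    assumed normalized to [0, 1] with larger-is-better orientation,
    so cost enters as (1 - cost)."""
    w_t, w_c, w_s = weights
    return w_t * throughput + w_c * (1.0 - cost) + w_s * safety

# Hypothetical candidate (B, L) designs mapped to normalized
# (throughput, cost, accident-prevention) values.
candidates = {(8, 60): (0.9, 0.8, 0.6), (6, 80): (0.7, 0.5, 0.8)}
best = max(candidates, key=lambda k: score(*candidates[k]))
```

The weighted-sum form makes the trade-off explicit: a wider plaza (larger B) buys throughput at higher cost, and the weights encode how the designer values each objective.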
Nelson, Carl A; Miller, David J; Oleynikov, Dmitry
2008-01-01
As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
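The graph formulation can be illustrated at toy scale: with tool-change frequencies as edge weights, an ordering that keeps frequently interchanged tools adjacent can be found by brute force for small tool sets. This is a small-n stand-in for the TSP approach in the abstract (a real instance would need a proper TSP solver); tool names and counts are hypothetical:

```python
from itertools import permutations

def best_tool_order(tools, change_freq):
    """Brute-force the tool ordering that maximizes total adjacency weight,
    i.e. places frequently-interchanged tools next to each other.
    change_freq: dict mapping frozenset({a, b}) -> observed change count."""
    def adjacency(order):
        return sum(change_freq.get(frozenset(pair), 0)
                   for pair in zip(order, order[1:]))
    return max(permutations(tools), key=adjacency)

freq = {frozenset({"grasper", "scissors"}): 9,   # hypothetical counts
        frozenset({"grasper", "cautery"}): 1,
        frozenset({"scissors", "cautery"}): 4}
order = best_tool_order(["grasper", "scissors", "cautery"], freq)
```

Here the frequent grasper-scissors transition pins those two tools next to each other, so the most common tool changes need the shortest moves in the modular magazine.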
How to Take HRMS Process Management to the Next Level with Workflow Business Event System
NASA Technical Reports Server (NTRS)
Rajeshuni, Sarala; Yagubian, Aram; Kunamaneni, Krishna
2006-01-01
Oracle Workflow with the Business Event System offers a complete process management solution for enterprises to manage business processes cost-effectively. Using Workflow event messaging, event subscriptions, AQ Servlet and advanced queuing technologies, this presentation will demonstrate the step-by-step design and implementation of system solutions in order to integrate two dissimilar systems and establish communication remotely. As a case study, the presentation walks you through the process of propagating organization name changes in other applications that originated from the HRMS module without changing applications code. The solution can be applied to your particular business cases for streamlining or modifying business processes across Oracle and non-Oracle applications.
The Northwest Indiana Robotic Telescope
NASA Astrophysics Data System (ADS)
Slavin, Shawn D.; Rengstorf, A. W.; Aros, J. C.; Segally, W. B.
2011-01-01
The Northwest Indiana Robotic (NIRo) Telescope is a remote, automated observing facility recently built by Purdue University Calumet (PUC) at a site in Lowell, IN, approximately 30 miles from the PUC campus. The recently dedicated observatory will be used for broadband and narrowband optical observations by PUC students and faculty, as well as pre-college students through the implementation of standards-based, middle-school modules developed by PUC astronomers and education faculty. The NIRo observatory and its web portal are the central technical elements of a project to improve astronomy education at Purdue Calumet and, more broadly, to improve science education in middle schools of the surrounding region. The NIRo Telescope is a 0.5-meter (20-inch) Ritchey-Chrétien design on a Paramount ME robotic mount, featuring a seven-position filter wheel (UBVRI, Hα, Clear), Peltier (thermoelectrically) cooled CCD camera with 3056 x 3056, square, 12 μm pixels, and off-axis guiding. It provides a coma-free imaging field of 0.5 degrees square, with a plate scale of 0.6 arcseconds per pixel. The observatory has a wireless internet connection, local weather station which publishes data to an internet weather site, and a suite of CCTV security cameras on an IP-based, networked video server. Control of power to every piece of instrumentation is maintained via internet-accessible power distribution units. The telescope can be controlled on-site, or off-site in an attended fashion via an internet connection, but will be used primarily in an unattended mode of automated observation, where queued observations will be scheduled daily from a database of requests. Completed observational data from queued operation will be stored on a campus-based server, which also runs the web portal and observation database. Partial support for this work was provided by the National Science Foundation's Course, Curriculum, and Laboratory Improvement (CCLI) program under Award No. 0736592.
Use of an administrative data set to determine optimal scheduling of an alcohol intervention worker.
Peterson, Timothy A; Desmond, Jeffrey S; Cunningham, Rebecca
2012-06-01
Brief alcohol interventions are efficacious in reducing alcohol-related consequences among emergency department (ED) patients. Use of non-clinical staff may increase alcohol screening and intervention; however, optimal scheduling of an alcohol intervention worker (AIW) is unknown. Determine optimal scheduling of an AIW based on peak discharge time of alcohol-related ED visits. Discharge times for consecutive patients with an alcohol-related diagnosis were abstracted from an urban ED's administrative data set from September 2005 through August 2007. Queuing theory was used to identify optimal scheduling. Data for weekends and weekdays were analyzed separately. Stationary independent period-by-period analysis was performed for hourly periods. An M/M/s queuing model, for Markovian inter-arrival time/Markovian service time/and potentially more than one server, was developed for each hour assuming: 1) a single unlimited queue; 2) 75% of patients waited no longer than 30 min for intervention; 3) AIW spent an average 20 min/patient. Estimated average utilization/hour was calculated; if utilization/hour exceeded 25%, AIW staff was considered necessary. There were 2282 patient visits (mean age 38 years, range 11-84 years). Weekdays accounted for 45% of visits; weekends 55%. On weekdays, one AIW from 6:00 a.m.-9:00 a.m. (max utilization 42%/hour) would accommodate 28% of weekday alcohol-related patients. On weekends, 5:00 a.m.-11:00 a.m. (max utilization 50%), one AIW would cover 54% of all weekend alcohol-related visits. During other hours the utilization rate falls below 25%/hour. Evaluating 2 years of discharge data revealed that 30 h of dedicated AIW time--18 weekend hours (5:00 a.m.-11:00 a.m.), 12 weekday hours (6:00 a.m.-9:00 a.m.)--would allow maximal patient alcohol screening and intervention with minimal additional burden to clinical staff. Copyright © 2012 Elsevier Inc. All rights reserved.
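The staffing logic in this study (an M/M/s model, a 20-minute mean intervention so that the service rate is 3/hour, and the criterion that 75% of patients wait no more than 30 minutes) can be checked with the standard Erlang-C formulas. The arrival rate below is back-calculated from the reported 42% peak utilization of a single worker, and is therefore an assumption:

```python
from math import exp, factorial

def erlang_c(lam, mu, s):
    """Probability an arrival must wait in an M/M/s queue (Erlang C)."""
    a = lam / mu                    # offered load in Erlangs
    rho = a / s                     # per-server utilization
    if rho >= 1:
        return 1.0
    tail = a**s / factorial(s) / (1 - rho)
    return tail / (sum(a**k / factorial(k) for k in range(s)) + tail)

def p_wait_exceeds(lam, mu, s, t):
    """P(wait > t) for FCFS M/M/s: Erlang C times an exponential tail."""
    return erlang_c(lam, mu, s) * exp(-(s * mu - lam) * t)

mu = 3.0            # 20 min per intervention -> 3 patients/hour
lam = 0.42 * mu     # assumed: arrival rate giving the reported 42% utilization
```

With these numbers `p_wait_exceeds(lam, mu, 1, 0.5)` comes out below the 0.25 ceiling, consistent with one worker satisfying the 75%-within-30-minutes criterion at peak hours.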
CCSDS Advanced Orbiting Systems Virtual Channel Access Service for QoS MACHETE Model
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Segui, John S.
2011-01-01
To support various communications requirements imposed by different missions, interplanetary communication protocols need to be designed, validated, and evaluated carefully. Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in "Simulator of Space Communication Networks" (NPO-41373), NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. By building abstract behavioral models of network protocols, one can validate performance after identifying the appropriate metrics of interest. The innovators have extended the MACHETE model library to include a generic link-layer Virtual Channel (VC) model supporting quality-of-service (QoS) controls based on IP streams. The main purpose of this generic Virtual Channel model addition was to interface fine-grain flow-based QoS (quality of service) between the network and MAC layers of the QualNet simulator, a commercial component of MACHETE. This software model adds the capability of mapping IP streams, based on header fields, to virtual channel numbers, allowing extended QoS handling at the link layer. This feature further refines the QoS already existing at the network layer. QoS at the network layer (e.g. diffserv) supports few QoS classes, so data from one class will be aggregated together; differentiating between flows internal to a class/priority is not supported. By adding QoS classification capability between the network and MAC layers through VCs, one maps multiple VCs onto the same physical link. Users then specify different VC weights, and different queuing and scheduling policies at the link layer. This VC model supports system performance analysis of various virtual channel link-layer QoS queuing schemes independent of the network-layer QoS systems.
A web-based appointment system to reduce waiting for outpatients: a retrospective study.
Cao, Wenjun; Wan, Yi; Tu, Haibo; Shang, Fujun; Liu, Danhong; Tan, Zhijun; Sun, Caihong; Ye, Qing; Xu, Yongyong
2011-11-22
Long waiting times for registration to see a doctor are problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients. Data from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site. A total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among different hospital departments, day of the week, and time of the day (P<0.001). Compared to the usual queuing method, the web-based appointment system could significantly increase patients' satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.
Simple Models for Airport Delays During Transition to a Trajectory-Based Air Traffic System
NASA Astrophysics Data System (ADS)
Brooker, Peter
It is now widely recognised that a paradigm shift in air traffic control concepts is needed. This requires state-of-the-art innovative technologies, making much better use of the information in the air traffic management (ATM) system. These paradigm shifts go under the names of NextGen in the USA and SESAR in Europe, which inter alia will make dramatic changes to the nature of airport operations. A vital part of moving from an existing system to a new paradigm is the operational implications of the transition process. There would be business incentives for early aircraft fitment, it is generally safer to introduce new technologies gradually, and researchers are already proposing potential transition steps to the new system. Simple queuing theory models are used to establish rough quantitative estimates of the impact of the transition to a more efficient time-based navigational and ATM system. Such models are approximate, but they do offer insight into the broad implications of system change and its significant features. 4D-equipped aircraft in essence have a contract with the airport runway and, in return, they would get priority over any other aircraft waiting for use of the runway. The main operational feature examined here is the queuing delays affecting non-4D-equipped arrivals. These get a reasonable service if the proportion of 4D-equipped aircraft is low, but this can deteriorate markedly for high proportions, and be economically unviable. Preventative measures would be to limit the additional growth of 4D-equipped flights and/or to modify their contracts to provide sufficient space for the non-4D-equipped flights to operate without excessive delays. There is a potential for non-Poisson models, for which there is little in the literature, and for more complex models, e.g. grouping a succession of 4D-equipped aircraft as a batch.
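The deterioration of delays for non-4D-equipped arrivals described above can be reproduced with the classical non-preemptive priority M/M/1 result. Treating 4D-equipped aircraft as the high-priority class is a simplification of the paper's "contract" mechanism, used here only as a rough sketch:

```python
def lowpri_wait(lam, mu, frac_hi):
    """Mean queueing delay of low-priority customers in an M/M/1 queue
    with two classes under non-preemptive priority (classic result):
        Wq_low = W0 / ((1 - rho_hi) * (1 - rho)),   W0 = lam / mu**2,
    where rho is total utilization and rho_hi the high-priority share."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    rho_hi = frac_hi * rho
    return (lam / mu**2) / ((1 - rho_hi) * (1 - rho))
```

At 80% runway utilization, for example, the low-priority delay is 4 time units when no traffic is 4D-equipped but more than triples when 90% of arrivals carry the priority contract, which mirrors the paper's observation that service for non-equipped flights deteriorates markedly at high equipage levels.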
Hospital bed occupancy: more than queuing for a bed.
Keegan, Andrew D
2010-09-06
Timely access to safe hospital care remains a major concern. Target bed-occupancy rates have been proposed as a measure of the ability of a hospital to function safely and effectively. High bed-occupancy rates have been shown to be associated with greater risks of hospital-associated infection and access block and to have a negative impact on staff health. Clinical observational data have suggested that bed occupancies above 85% could adversely affect safe, effective hospital function. Using this figure, at least initially, would be of value in the planning and operational management of public hospital beds in Australia. There is an urgent need to develop meaningful outcome measures of patient care that could replace the process measures currently in use.
A Seed-Based Plant Propagation Algorithm: The Feeding Station Model
Salhi, Abdellah
2015-01-01
The seasonal production of fruit and seeds is akin to opening a feeding station, such as a restaurant. Agents coming to feed on the fruit are like customers attending the restaurant; they arrive at a certain rate and get served at a certain rate following some appropriate processes. The same applies to birds and animals visiting and feeding on ripe fruit produced by plants such as the strawberry plant. This phenomenon underpins the seed dispersion of the plants. Modelling it as a queuing process results in a seed-based search/optimisation algorithm. This variant of the Plant Propagation Algorithm is described, analysed, tested on nontrivial problems, and compared with well established algorithms. The results are included. PMID:25821858
Time studies in A&E departments--a useful tool for management.
Aharonson-Daniel, L; Fung, H; Hedley, A J
1996-01-01
A time and motion study was conducted in an accident and emergency (A&E) department in a Hong Kong Government hospital in order to suggest solutions for severe queuing problems found in A&E. The study provided useful information about the patterns of arrival and service; the throughput; and the factors that influence the length of the queue at the A&E department. Plans for building a computerized simulation model were dropped as new intelligence generated by the study enabled problem solving using simple statistical analysis and common sense. Demonstrates some potential benefits for management in applying operations research methods in busy clinical working environments. The implementation of the recommendations made by this study successfully eliminated queues in A&E.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attinella, John E.; Davis, Kristan D.; Musselman, Roy G.
Methods, apparatuses, and computer program products for servicing a globally broadcast interrupt signal in a multi-threaded computer comprising a plurality of processor threads. Embodiments include an interrupt controller indicating in a plurality of local interrupt status locations that a globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include a thread determining that a local interrupt status location corresponding to the thread indicates that the globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include the thread processing one or more entries in a global interrupt status bit queue based on whether global interrupt status bits associated with the globally broadcast interrupt signal are locked. Each entry in the global interrupt status bit queue corresponds to a queued global interrupt.
LiFi based automated shopping assistance application in IoT
NASA Astrophysics Data System (ADS)
Akter, Sharmin; Funke Olanrewaju, Rashidah, Dr; Islam, Thouhedul; Salma
2018-05-01
Urban people minimize shopping time in daily life due to time constraints. From that point of view, the concept of the supermarket has become popular, since consumers can buy different items in the same place. However, a customer may spend hours finding desired items in a large supermarket. In addition, customers must also queue to pay at the counter, which is time-consuming as well. As a result, a customer may spend 2-3 hours shopping in a large superstore. This paper proposes an Internet of Things and Li-Fi based automated application for smart phone and web to find items easily during shopping, which can save consumers' time as well as reduce manpower in the supermarket.
Modeling users' activity on Twitter networks: validation of Dunbar's number
NASA Astrophysics Data System (ADS)
Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2012-02-01
Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the ``economy of attention'' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
EpiPOD : community vaccination and dispensing model user's guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, M.; Samsa, M.; Walsh, D.
EpiPOD is a modeling system that enables local, regional, and county health departments to evaluate and refine their plans for mass distribution of antiviral and antibiotic medications and vaccines. An intuitive interface requires users to input as few or as many plan specifics as are available in order to simulate a mass treatment campaign. Behind the input interface, a system dynamics model simulates pharmaceutical supply logistics, hospital and first-responder personnel treatment, population arrival dynamics and treatment, and disease spread. When the simulation is complete, users have estimates of the number of illnesses in the population at large, the number of ill persons seeking treatment, and queuing and delays within the mass treatment system--all metrics by which the plan can be judged.
NASA Astrophysics Data System (ADS)
Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław
2018-02-01
In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. For systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food supply, and health. We focus on computer systems, with particular attention to cache memory, and propose an analytical model that connects the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.
Stochastic Stability in Internet Router Congestion Games
NASA Astrophysics Data System (ADS)
Chung, Christine; Pyrga, Evangelia
Congestion control at bottleneck routers on the internet is a long-standing problem. Many policies have been proposed for effective ways to drop packets from the queues of these routers so that network endpoints will be inclined to share router capacity fairly and minimize the overflow of packets trying to enter the queues. We study just how effective some of these queuing policies are when each network endpoint is a self-interested player with no information about the other players' actions or preferences. By employing the adaptive learning model of evolutionary game theory, we study policies such as Droptail, RED, and the greedy-flow-punishing policy proposed by Gao et al. [10] to find the stochastically stable states: the states of the system that will be reached in the long run.
Analysis and improvement measures of flight delay in China
NASA Astrophysics Data System (ADS)
Zang, Yuhang
2017-03-01
Firstly, this paper establishes a principal component regression model to analyze the data quantitatively, using principal component analysis to extract three principal component factors of flight delays. The least squares method is then applied to these factors, and the regression equation is obtained by substitution; it shows that the main cause of flight delays is the airlines themselves, followed by weather and traffic. Aiming at these problems, this paper improves the controllable aspect of traffic flow control: an adaptive genetic queuing model is established for the runway terminal area. An optimization method is developed for fifteen planes landing simultaneously on three runways, based on Beijing Capital International Airport; comparing the results with the existing FCFS (first-come, first-served) algorithm proves the superiority of the model.
Modeling Users' Activity on Twitter Networks: Validation of Dunbar's Number
Gonçalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2011-01-01
Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior. PMID:21826200
Developing a cross-docking network design model under uncertain environment
NASA Astrophysics Data System (ADS)
Seyedhoseini, S. M.; Rashid, Reza; Teimoury, E.
2015-06-01
Cross-docking is a logistic concept which plays an important role in supply chain management by decreasing inventory holding, order picking, transportation costs and delivery time. Given these concerns, and the importance of congestion in cross docks, we present a mixed-integer model that optimizes the location and design of cross docks simultaneously so as to minimize total transportation and operating costs. The model incorporates queuing theory for the design aspects: we consider a network of cross docks and customers in which two M/M/c queues describe the operations of indoor trucks and outdoor trucks at each cross dock. To illustrate the performance of the model, a real case has also been examined, which indicated the effectiveness of the proposed model.
Servicing a globally broadcast interrupt signal in a multi-threaded computer
Attinella, John E.; Davis, Kristan D.; Musselman, Roy G.; Satterfield, David L.
2015-12-29
Methods, apparatuses, and computer program products for servicing a globally broadcast interrupt signal in a multi-threaded computer comprising a plurality of processor threads. Embodiments include an interrupt controller indicating in a plurality of local interrupt status locations that a globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include a thread determining that a local interrupt status location corresponding to the thread indicates that the globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include the thread processing one or more entries in a global interrupt status bit queue based on whether global interrupt status bits associated with the globally broadcast interrupt signal are locked. Each entry in the global interrupt status bit queue corresponds to a queued global interrupt.
Space Link Extension Protocol Emulation for High-Throughput, High-Latency Network Connections
NASA Technical Reports Server (NTRS)
Tchorowski, Nicole; Murawski, Robert
2014-01-01
New space missions require higher data rates and new protocols to meet these requirements. These high data rate space communication links push the limitations of not only the space communication links, but of the ground communication networks and protocols which forward user data to remote ground stations (GS) for transmission. The Consultative Committee for Space Data Systems (CCSDS) Space Link Extension (SLE) standard protocol is one protocol that has been proposed for use by the NASA Space Network (SN) Ground Segment Sustainment (SGSS) program. New protocol implementations must be carefully tested to ensure that they provide the required functionality, especially because of the remote nature of spacecraft. The SLE protocol standard has been tested in the NASA Glenn Research Center's SCENIC Emulation Lab in order to observe its operation under realistic network delay conditions. More specifically, the delay between the NASA Integrated Services Network (NISN) and spacecraft has been emulated. The round trip time (RTT) delay for the continental NISN network has been shown to be up to 120 ms; as such, the SLE protocol was tested with network delays ranging from 0 ms to 200 ms. Both a base network condition and an SLE connection were tested with these RTT delays, and the reaction of both network tests to the delay conditions was recorded. Throughput for both of these links was set at 1.2 Gbps. The results show that, in the presence of realistic network delay, the SLE link throughput is significantly reduced, while the base network throughput remained at the 1.2 Gbps specification. The decrease in SLE throughput has been attributed to the implementation's use of blocking calls. The decrease in throughput is not acceptable for high data rate links, as the link requires constant data flow in order for spacecraft and ground radios to stay synchronized, unless significant data is queued at the ground station. In cases where queuing the data is not an option, such as during real time transmissions, the SLE implementation cannot support high data rate communication.
COSMOS: Python library for massively parallel workflows
Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.
2014-01-01
Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kranz, L.; VanKuiken, J.C.; Gillette, J.L.
1989-12-01
The STATS model, now modified to run on microcomputers, uses user-defined component uncertainties to calculate composite uncertainty distributions for systems or technologies. The program can be used to investigate uncertainties for a single technology or to compare two technologies. Although the term "technology" is used throughout the program screens, the program can accommodate very broad problem definitions. For example, electrical demand uncertainties, health risks associated with toxic material exposures, or traffic queuing delay times can be estimated. The terminology adopted in this version of STATS reflects the purpose of the earlier version, which was to aid in comparing advanced electrical generating technologies. A comparison of two clean coal technologies in two power plants is given as a case study illustration. 7 refs., 35 figs., 7 tabs.
Gateway-Assisted Retransmission for Lightweight and Reliable IoT Communications.
Chang, Hui-Ling; Wang, Cheng-Gang; Wu, Mong-Ting; Tsai, Meng-Hsun; Lin, Chia-Ying
2016-09-22
Message Queuing Telemetry Transport for Sensor Networks (MQTT-SN) and Constrained Application Protocol (CoAP) are two protocols supporting publish/subscribe models for IoT devices to publish messages to interested subscribers. Retransmission mechanisms are introduced to compensate for the lack of data reliability. If the device does not receive the acknowledgement (ACK) before retransmission timeout (RTO) expires, the device will retransmit data. Setting an appropriate RTO is important because the delay may be large or retransmission may be too frequent when the RTO is inappropriate. We propose a Gateway-assisted CoAP (GaCoAP) to dynamically compute RTO for devices. Simulation models are proposed to investigate the performance of GaCoAP compared with four other methods. The experiment results show that GaCoAP is more suitable for IoT devices.
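The abstract does not specify how GaCoAP computes its RTO. As a point of reference for what "dynamically computing an RTO" typically involves, here is a sketch of an adaptive estimator in the style of TCP's smoothed-RTT algorithm (RFC 6298); the gains and the floor value are the conventional defaults, not GaCoAP's:

```python
def make_rto_estimator(alpha=0.125, beta=0.25, k=4, floor=0.2):
    """Closure tracking smoothed RTT (SRTT) and RTT variance (RTTVAR);
    each call feeds in one round-trip-time sample and returns the
    updated retransmission timeout SRTT + k * RTTVAR."""
    state = {"srtt": None, "rttvar": None}

    def update(rtt):
        if state["srtt"] is None:
            # First sample: initialise as in RFC 6298.
            state["srtt"], state["rttvar"] = rtt, rtt / 2
        else:
            # Exponentially weighted updates; variance is updated first,
            # using the previous SRTT, per the specification.
            state["rttvar"] = (1 - beta) * state["rttvar"] + beta * abs(state["srtt"] - rtt)
            state["srtt"] = (1 - alpha) * state["srtt"] + alpha * rtt
        return max(floor, state["srtt"] + k * state["rttvar"])

    return update
```

The behaviour this produces, an RTO that starts conservative and tightens as samples stabilise, is exactly the trade-off the abstract describes: too small an RTO retransmits too often, too large an RTO adds delay.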
COSMOS: Python library for massively parallel workflows.
Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J
2014-10-15
Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
Job-mix modeling and system analysis of an aerospace multiprocessor.
NASA Technical Reports Server (NTRS)
Mallach, E. G.
1972-01-01
An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
Power law signature of media exposure in human response waiting time distributions
NASA Astrophysics Data System (ADS)
Crane, Riley; Schweitzer, Frank; Sornette, Didier
2010-05-01
We study the humanitarian response to the destruction brought by the tsunami generated by the Sumatra earthquake of December 26, 2004, as measured by donations, and find that it decays in time as a power law ~1/t^α with α = 2.5 ± 0.1. This behavior is suggested to be the rare outcome of a priority queuing process in which individuals execute tasks at a rate slightly faster than the rate at which new tasks arise. We believe this to be empirical evidence documenting the recently predicted [G. Grinstein and R. Linsker, Phys. Rev. E 77, 012101 (2008)] regime, and provide additional independent evidence suggesting that this "highly attentive regime" arises as a result of the intense focus placed on this donation "task" by the media.
Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Pentakalos, Odysseas I.
1995-01-01
Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
Capacity-constrained traffic assignment in networks with residual queues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, W.H.K.; Zhang, Y.
2000-04-01
This paper proposes a capacity-constrained traffic assignment model for strategic transport planning in which the steady-state user equilibrium principle is extended for road networks with residual queues. Therefore, the road-exit capacity and the queuing effects can be incorporated into the strategic transport model for traffic forecasting. The proposed model is applicable to congested networks, particularly when the traffic demand exceeds the capacity of the network during the peak period. An efficient solution method is proposed for solving the steady-state traffic assignment problem with residual queues. Then a simple numerical example is employed to demonstrate the application of the proposed model and solution method, while an example of a medium-sized arterial highway network in Sioux Falls, South Dakota, is used to test the applicability of the proposed solution to real problems.
Gateway-Assisted Retransmission for Lightweight and Reliable IoT Communications
Chang, Hui-Ling; Wang, Cheng-Gang; Wu, Mong-Ting; Tsai, Meng-Hsun; Lin, Chia-Ying
2016-01-01
Message Queuing Telemetry Transport for Sensor Networks (MQTT-SN) and Constrained Application Protocol (CoAP) are two protocols supporting publish/subscribe models that allow IoT devices to publish messages to interested subscribers. Retransmission mechanisms are introduced to compensate for the lack of data reliability. If the device does not receive the acknowledgement (ACK) before the retransmission timeout (RTO) expires, the device retransmits the data. Setting an appropriate RTO is important because the delay may be large or retransmission may be too frequent when the RTO is inappropriate. We propose a Gateway-assisted CoAP (GaCoAP) to dynamically compute the RTO for devices. Simulation models are proposed to investigate the performance of GaCoAP compared with four other methods. The experiment results show that GaCoAP is more suitable for IoT devices. PMID:27669243
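The abstract does not specify how GaCoAP computes the RTO. As a hedged illustration of what dynamic RTO computation typically looks like, the sketch below follows the smoothed-RTT scheme used by TCP (in the style of RFC 6298); the class and its constants are our invention, not part of GaCoAP or CoAP.

```python
class RtoEstimator:
    """Illustrative dynamic RTO: exponentially smoothed RTT plus a
    multiple of the smoothed RTT deviation, with a 1-second floor."""
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4

    def __init__(self, initial_rto=2.0):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # smoothed RTT deviation
        self.rto = initial_rto

    def observe(self, rtt):
        """Update the estimator with one measured RTT; return the new RTO."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        self.rto = max(1.0, self.srtt + self.K * self.rttvar)
        return self.rto
```

A device would re-arm its retransmission timer with `observe(measured_rtt)` after each acknowledged exchange; stable RTTs shrink the RTO toward the floor, while jittery RTTs widen it.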
The Elixir System: Data Characterization and Calibration at the Canada-France-Hawaii Telescope
NASA Astrophysics Data System (ADS)
Magnier, E. A.; Cuillandre, J.-C.
2004-05-01
The Elixir System at the Canada-France-Hawaii Telescope performs data characterization and calibration for all data from the wide-field mosaic imagers CFH12K and MegaPrime. The project has several related goals, including monitoring data quality, providing high-quality master detrend images, determining the photometric and astrometric calibrations, and automatic preprocessing of images for queued service observing (QSO). The Elixir system has been used for all data obtained with CFH12K since the QSO project began in 2001 January. In addition, it has been used to process archival data from the CFH12K and all MegaPrime observations beginning in 2002 December. The Elixir system has been extremely successful in providing well-characterized data to the end observers, who may otherwise be overwhelmed by data-processing concerns.
Efficient priority queueing routing strategy on networks of mobile agents
NASA Astrophysics Data System (ADS)
Wu, Gan-Hua; Yang, Hui-Jie; Pan, Jia-Hui
2018-03-01
As a consequence of their practical implications for communications networks, traffic dynamics on complex networks have recently captivated researchers. Previous routing strategies for improving transport efficiency have paid little attention to the order in which packets should be forwarded, simply using a first-in, first-out queue discipline. Here, we apply a priority queuing discipline and propose a shortest-distance-first routing strategy on networks of mobile agents. Numerical experiments reveal that the proposed scheme remarkably improves both the network throughput and the packet arrival rate and reduces both the average traveling time and the ratio of waiting time to traveling time. Moreover, we find that the network capacity increases with an increase in both the communication radius and the number of agents. Our work may be helpful for the design of routing strategies on networks of mobile agents.
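A shortest-distance-first discipline can be sketched with a binary heap. The following is a minimal illustration under assumed representations (packets as `(distance, id)` pairs, a per-step forwarding capacity), not the authors' implementation:

```python
import heapq

def forward_shortest_distance_first(buffer, capacity):
    """Forward up to `capacity` packets per time step, always choosing
    the packets whose remaining distance to their destination is
    smallest, instead of plain FIFO order. `buffer` holds
    (distance, packet_id) pairs and is consumed in place."""
    heapq.heapify(buffer)                     # min-heap keyed on distance
    sent = []
    while buffer and len(sent) < capacity:
        sent.append(heapq.heappop(buffer)[1])
    return sent
```

With `buffer = [(3, 'a'), (1, 'b'), (2, 'c')]` and `capacity = 2`, packets `'b'` and `'c'` are forwarded first, while FIFO would have sent `'a'` and `'b'`.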
Fleet Sizing of Automated Material Handling Using Simulation Approach
NASA Astrophysics Data System (ADS)
Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny
2018-03-01
Automated material handling tends to be chosen rather than using human power in material handling activity for production floor in manufacturing company. One critical issue in implementing automated material handling is designing phase to ensure that material handling activity more efficient in term of cost spending. Fleet sizing become one of the topic in designing phase. In this research, simulation approach is being used to solve fleet sizing problem in flow shop production to ensure optimum situation. Optimum situation in this research means minimum flow time and maximum capacity in production floor. Simulation approach is being used because flow shop can be modelled into queuing network and inter-arrival time is not following exponential distribution. Therefore, contribution of this research is solving fleet sizing problem with multi objectives in flow shop production using simulation approach with ARENA Software
Developing a new stochastic competitive model regarding inventory and price
NASA Astrophysics Data System (ADS)
Rashid, Reza; Bozorgi-Amiri, Ali; Seyedhoseini, S. M.
2015-09-01
Within the competition in today's business environment, the design of supply chains becomes more complex than before. This paper deals with the retailer's location problem when customers choose their vendors, and inventory costs have been considered for retailers. In a competitive location problem, price and location of facilities affect demands of customers; consequently, simultaneous optimization of the location and inventory system is needed. To prepare a realistic model, demand and lead time have been assumed as stochastic parameters, and queuing theory has been used to develop a comprehensive mathematical model. Due to the complexity of the problem, a branch and bound algorithm has been developed, and its performance has been validated in several numerical examples, which indicated the effectiveness of the algorithm. Also, a real case has been prepared to demonstrate the performance of the model in the real world.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on a vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO in large-sized problems.
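The M/M/m/k queues assumed for the stochastic demand have a closed-form steady state. The sketch below implements the standard birth-death solution for an M/M/c/K queue (illustrative only; the names `lam`, `mu`, `c`, `k` are ours, not the article's notation):

```python
from math import factorial

def mmck_state_probs(lam, mu, c, k):
    """Steady-state probabilities p_0..p_K of an M/M/c/K queue:
    Poisson arrivals at rate lam, c exponential servers at rate mu,
    at most K customers in the system. Uses the standard birth-death
    balance equations: p_n ~ a^n/n! for n <= c and
    a^n/(c! c^(n-c)) for n > c, with a = lam/mu."""
    a = lam / mu
    unnorm = []
    for n in range(k + 1):
        if n <= c:
            unnorm.append(a**n / factorial(n))
        else:
            unnorm.append(a**n / (factorial(c) * c**(n - c)))
    z = sum(unnorm)
    return [u / z for u in unnorm]

def blocking_prob(lam, mu, c, k):
    """Probability that an arriving customer finds the system full."""
    return mmck_state_probs(lam, mu, c, k)[k]
```

For the degenerate M/M/1/1 case with equal arrival and service rates, the system is empty or full with probability 1/2 each, so the blocking probability is 0.5.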
Incorporating Active Runway Crossings in Airport Departure Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2010-01-01
A mixed integer linear program is presented for deterministically scheduling departure and arrival aircraft at airport runways. This method addresses different schemes of managing the departure queuing area by treating it as first-in-first-out queues or as a simple parking area where any available aircraft can take off irrespective of its relative sequence with others. In addition, this method explicitly considers separation criteria between successive aircraft and also incorporates an optional prioritization scheme using time windows. Multiple objectives pertaining to throughput and system delay are used independently. Results indicate improvement over a basic first-come-first-serve rule in both system delay and throughput. Minimizing system delay results in small deviations from optimal throughput, whereas maximizing throughput results in large deviations in system delay. Enhancements for computational efficiency are also presented in the form of reformulating certain constraints and defining additional inequalities for better bounds.
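The first-come-first-serve baseline that the optimized schedules are compared against can be sketched in a few lines. This is a generic illustration with a single uniform separation time (the paper's separation criteria depend on aircraft pairs); all names are invented:

```python
def fcfs_schedule(ready_times, sep=60):
    """Baseline FCFS runway schedule: aircraft take off in order of
    readiness, each at the later of its own ready time and `sep`
    seconds after the previous take-off. Returns take-off times."""
    prev = float('-inf')
    takeoffs = []
    for r in sorted(ready_times):
        prev = max(r, prev + sep)   # enforce runway separation
        takeoffs.append(prev)
    return takeoffs
```

With ready times `[0, 10, 200]` and 60-second separation, the second aircraft is delayed to t=60 while the third departs on time at t=200; an optimizing scheduler would additionally resequence aircraft when that reduces total delay.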
NASA Astrophysics Data System (ADS)
Sarsimbayeva, S. M.; Kospanova, K. K.
2015-11-01
The article discusses problems of porting object-oriented Windows applications from the C++ programming language to the .NET platform using the C# programming language. C++ has long been considered the best language for software development, but the implicit mistakes that come along with the tool may lead to memory leaks and other errors. The .NET platform and C#, made by Microsoft, are solutions to the issues mentioned above. The world economy and industry rely heavily on applications developed in C++, but the new language, with its stability and transferability to .NET, brings many advantages. An example can be presented using applications that imitate the work of queuing systems. The authors solved the problem of porting an application imitating the operation of a seaport from C++ to the .NET platform using C# within Visual Studio.
Optimizing Resource Utilization in Grid Batch Systems
NASA Astrophysics Data System (ADS)
Gellrich, Andreas
2012-12-01
On Grid sites, the requirements of the computing tasks (jobs) on computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to a POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then allows jobs to be distinguished, if users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the scheduler MAUI. In tests these limitations could be overcome with a home-made scheduler.
Paying for Express Checkout: Competition and Price Discrimination in Multi-Server Queuing Systems
Deck, Cary; Kimbrough, Erik O.; Mongrain, Steeve
2014-01-01
We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting cost-based-price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single queue seller focuses on the patient shoppers thereby driving down prices and profits while increasing consumer surplus. PMID:24667809
A Computer Graphics Human Figure Application Of Biostereometrics
NASA Astrophysics Data System (ADS)
Fetter, William A.
1980-07-01
A study of improved computer graphic representation of the human figure is being conducted under a National Science Foundation grant. Special emphasis is given to biostereometrics as a primary data base from which applications requiring a variety of levels of detail may be prepared. For example, a human figure represented by a single point can be very useful in overview plots of a population. A crude ten-point figure can be adequate for queuing theory studies and simulated movement of groups. A one hundred point figure can usefully be animated to achieve different overall body activities, including male and female figures. A one thousand point figure, similarly animated, begins to be useful in anthropometrics and kinesiology gross body movements. Extrapolations of this order-of-magnitude approach ultimately should achieve very complex data bases and a program which automatically selects the correct level of detail for the task at hand. See Summary Figure 1.
Human dynamics revealed through Web analytics
NASA Astrophysics Data System (ADS)
Gonçalves, Bruno; Ramasco, José J.
2008-08-01
The increasing ubiquity of Internet access and the frequency with which people interact with it raise the possibility of using the Web to better observe, understand, and monitor several aspects of human social behavior. Web sites with large numbers of frequently returning users are ideal for this task. If these sites belong to companies or universities, their usage patterns can furnish information about the working habits of entire populations. In this work, we analyze the properly anonymized logs detailing the access history to Emory University’s Web site. Emory is a medium-sized university located in Atlanta, Georgia. We find interesting structure in the activity patterns of the domain and study in a systematic way the main forces behind the dynamics of the traffic. In particular, we find that linear preferential linking, priority-based queuing, and the decay of interest for the contents of the pages are the essential ingredients to understand the way users navigate the Web.
Suss, Samuel; Bhuiyan, Nadia; Demirli, Kudret; Batist, Gerald
2017-06-01
Outpatient cancer treatment centers can be considered as complex systems in which several types of medical professionals and administrative staff must coordinate their work to achieve the overall goals of providing quality patient care within budgetary constraints. In this article, we use analytical methods that have been successfully employed for other complex systems to show how a clinic can simultaneously reduce patient waiting times and non-value added staff work in a process that has a series of steps, more than one of which involves a scarce resource. The article describes the system model and the key elements in the operation that lead to staff rework and patient queuing. We propose solutions to the problems and provide a framework to evaluate clinic performance. At the time of this report, the proposals are in the process of implementation at a cancer treatment clinic in a major metropolitan hospital in Montreal, Canada.
Xayaphoummine, A.; Bucher, T.; Isambert, H.
2005-01-01
The Kinefold web server provides a web interface for stochastic folding simulations of nucleic acids on second to minute molecular time scales. Renaturation or co-transcriptional folding paths are simulated at the level of helix formation and dissociation in agreement with the seminal experimental results. Pseudoknots and topologically ‘entangled’ helices (i.e. knots) are efficiently predicted taking into account simple geometrical and topological constraints. To encourage interactivity, simulations launched as immediate jobs are automatically stopped after a few seconds and return adapted recommendations. Users can then choose to continue incomplete simulations using the batch queuing system or go back and modify suggested options in their initial query. Detailed outputs provide (i) a series of low free energy structures, (ii) an online animated folding path and (iii) a programmable trajectory plot focusing on a few helices of interest to each user. The service can be accessed at . PMID:15980546
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to determine efficiently the effective green and red times for a signalized intersection.
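The Poisson arrival assumption used above can be sketched by drawing exponential inter-arrival gaps, the standard way to simulate such an arrival stream. This is a generic illustration (all names are invented, not taken from the article):

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Arrival times of a Poisson process with the given rate
    (vehicles per unit time) over [0, horizon), generated by summing
    exponentially distributed inter-arrival gaps."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)   # gap ~ Exp(rate)
        if t >= horizon:
            return times
        times.append(t)
```

Over a long horizon the number of arrivals concentrates around `rate * horizon`, and the counts in disjoint cycles are independent, which is what makes the per-cycle queuing analysis tractable.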
Social influence, agent heterogeneity and the emergence of the urban informal sector
NASA Astrophysics Data System (ADS)
García-Díaz, César; Moreno-Monroy, Ana I.
2012-02-01
We develop an agent-based computational model in which the urban informal sector acts as a buffer where rural migrants can earn some income while queuing for higher paying modern-sector jobs. In the model, the informal sector emerges as a result of rural-urban migration decisions of heterogeneous agents subject to social influence in the form of neighboring effects of varying strengths. Besides using a multinomial logit choice model that allows for agent idiosyncrasy, explicit agent heterogeneity is introduced in the form of socio-demographic characteristics preferred by modern-sector employers. We find that different combinations of the strength of social influence and the socio-economic composition of the workforce lead to very different urbanization and urban informal sector shares. In particular, moderate levels of social influence and a large proportion of rural inhabitants with preferred socio-demographic characteristics are conducive to a higher urbanization rate and a larger informal sector.
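The multinomial logit choice model mentioned above reduces to softmax probabilities over the agents' utilities for each alternative. A minimal sketch (the `scale` parameter, our name, plays the role of the logit dispersion parameter; this is not the authors' code):

```python
from math import exp

def logit_choice_probs(utilities, scale=1.0):
    """Multinomial-logit choice probabilities:
    P_i = exp(u_i/scale) / sum_j exp(u_j/scale).
    Larger `scale` means noisier, more idiosyncratic choices."""
    m = max(utilities)                        # shift for numerical stability
    weights = [exp((u - m) / scale) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]
```

An agent deciding between staying rural and migrating would draw its choice from these probabilities, with utilities that include the social-influence (neighboring) terms described in the abstract.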
A queueing theory based model for business continuity in hospitals.
Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R
2013-01-01
Clinical activities can be seen as the results of a precise and defined succession of events, where every single phase is characterized by a waiting time which includes working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to the working load only is not enough. A risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used for evaluating the possible interventions and to protect the whole system from technology failures. The following paper reports a case study on the application of the proposed integrated model, including a risk analysis approach and a queuing theory model, for defining the proper number of devices essential to guarantee medical activity and comply with business continuity management requirements in hospitals.
Network Configuration Analysis for Formation Flying Satellites
NASA Technical Reports Server (NTRS)
Knoblock, Eric J.; Wallett, Thomas M.; Konangi, Vijay K.; Bhasin, Kul B.
2001-01-01
The performance of two networks to support autonomous multi-spacecraft formation flying systems is presented. Both systems are comprised of a ten-satellite formation, with one of the satellites designated as the central or 'mother ship.' All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation, and the second system uses the IEEE 802.11 protocol architecture within the formation. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay, IP queue size and IP processing delay at the mother ship as well as end-to-end delay for both systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.
Territory inheritance in clownfish.
Buston, Peter M
2004-01-01
Animal societies composed of breeders and non-breeders present a challenge to evolutionary theory because it is not immediately apparent how natural selection can preserve the genes that underlie non-breeding strategies. The clownfish Amphiprion percula forms groups composed of a breeding pair and 0-4 non-breeders. Non-breeders gain neither present direct, nor present indirect benefits from the association. To determine whether non-breeders obtain future direct benefits, I investigated the pattern of territory inheritance. I show that non-breeders stand to inherit the territory within which they reside. Moreover, they form a perfect queue for breeding positions; a queue from which nobody disperses and within which nobody contests. I suggest that queuing might be favoured by selection because it confers a higher probability of attaining breeding status than either dispersing or contesting. This study illustrates that, within animal societies, individuals may tolerate non-breeding positions solely because of their potential to realize benefits in the future. PMID:15252999
IBM NJE protocol emulator for VAX/VMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1981-01-01
Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.
Studies of the limit order book around large price changes
NASA Astrophysics Data System (ADS)
Tóth, B.; Kertész, J.; Farmer, J. D.
2009-10-01
We study the dynamics of the limit order book of liquid stocks after experiencing large intra-day price changes. In the data we find large variations in several microscopic measures, e.g., the volatility, the bid-ask spread, the bid-ask imbalance, the number of queuing limit orders, the activity (number and volume) of limit orders placed and canceled, etc. The relaxation of these quantities is generally very slow and can be described by a power law with exponent ≈ 0.4. We introduce a numerical model in order to better understand the empirical results. We find that with a zero-intelligence deposition model of the order flow the empirical results can be reproduced qualitatively. This suggests that the slow relaxations might not be the result of agents' strategic behaviour. Studying the difference between the exponents found empirically and numerically helps us to better identify the role of strategic behaviour in the phenomena.
NASA Astrophysics Data System (ADS)
Tadić, Bosiljka; Thurner, Stefan; Rodgers, G. J.
2004-03-01
We study the microscopic time fluctuations of traffic load and the global statistical properties of a dense traffic of particles on scale-free cyclic graphs. For a wide range of driving rates R the traffic is stationary and the load time series exhibits antipersistence due to the regulatory role of the superstructure associated with two hub nodes in the network. We discuss how the superstructure affects the functioning of the network at high traffic density and at the jamming threshold. The degree of correlations systematically decreases with increasing traffic density and eventually disappears when approaching a jamming density Rc. Already before jamming we observe qualitative changes in the global network-load distributions and the particle queuing times. These changes are related to the occurrence of temporary crises in which the network-load increases dramatically, and then slowly falls back to a value characterizing free flow.
Factors influencing experience in crowds - The participant perspective.
Filingeri, Victoria; Eason, Ken; Waterson, Patrick; Haslam, Roger
2017-03-01
Humans encounter crowd situations on a daily basis, resulting in both negative and positive experiences. Understanding how to optimise the participant experience of crowds is important. In the study presented in this paper, 5 focus groups were conducted (35 participants, age range: 21-71 years) and 55 crowd situations observed (e.g. transport hubs, sport events, retail situations). Influences on participant experience in crowds identified by the focus groups and observations included: physical design of crowd space and facilities (layout, queuing strategies), crowd movement (monitoring capacity, pedestrian flow), communication and information (signage, wayfinding), comfort and welfare (provision of facilities, environmental comfort), and public order. It was found that important aspects affecting participant experience are often not considered systematically in the planning of events or crowd situations. The findings point to human factors aspects of crowds being overlooked, with the experiences of participants often poor. Copyright © 2016. Published by Elsevier Ltd.
Integrating LMINET with TAAM and SIMMOD: A Feasibility Study
NASA Technical Reports Server (NTRS)
Long, Dou; Stouffer-Coston, Virginia; Kostiuk, Peter; Kula, Richard; Yackovetsky, Robert (Technical Monitor)
2001-01-01
LMINET is a queuing network air traffic simulation model covering 64 large airports and the entire National Airspace System of the United States. TAAM and SIMMOD are two widely used event-driven air traffic simulation models, mostly for airports. Based on our proposed Progressive Augmented window approach, TAAM and SIMMOD are integrated with LMINET through flight schedules. In the integration, the flight schedules are modified through the flight delays reported by the other models. The benefit to the local simulation study is that TAAM or SIMMOD can take the modified schedule from LMINET, which takes into account the air traffic congestion and flight delays at the national network level. We demonstrate the value of the integrated models with case studies at Chicago O'Hare International Airport and Washington Dulles International Airport. Details of the integration are reported and future work for a full-blown integration is identified.
A framework for service enterprise workflow simulation with multi-agents cooperation
NASA Astrophysics Data System (ADS)
Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun
2013-11-01
Dynamic process modelling for service business is the key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach for analysing service business processes dynamically. The generic method for service business workflow simulation is based on discrete-event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service workflow-oriented framework for the process simulation of service businesses using multi-agent cooperation to address these issues. Social rationality of agents is introduced into the proposed framework. Adopting rationality as one social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.
Optimizing raid performance with cache
NASA Technical Reports Server (NTRS)
Bouzari, Alex
1994-01-01
We live in a world of increasingly complex applications and operating systems. Information is increasing at a mind-boggling rate. The consolidation of text, voice, and imaging represents an even greater challenge for our information systems, forcing us to address three important questions: Where do we store all this information? How do we access it? And how do we protect it against the threat of loss or damage? Introduced in the 1980s, RAID (Redundant Arrays of Independent Disks) represents a cost-effective solution to the needs of the information age. While fulfilling expectations for high storage capacity and reliability, RAID is sometimes criticized in the area of performance. However, there are design elements that can significantly enhance performance. They can be subdivided into two areas: (1) RAID levels, or basic architecture; and (2) enhancement schemes such as intelligent caching, support for tagged command queuing, and use of SCSI-2 Fast and Wide features.
Replenishing data descriptors in a DMA injection FIFO buffer
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Cernohous, Bob R [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Kumar, Sameer [White Plains, NY; Parker, Jeffrey J [Rochester, MN
2011-10-11
Methods, apparatus, and products are disclosed for replenishing data descriptors in a Direct Memory Access (`DMA`) injection first-in-first-out (`FIFO`) buffer that include: determining, by a messaging module on an origin compute node, whether a number of data descriptors in a DMA injection FIFO buffer exceeds a predetermined threshold, each data descriptor specifying an application message for transmission to a target compute node; queuing, by the messaging module, a plurality of new data descriptors in a pending descriptor queue if the number of the data descriptors in the DMA injection FIFO buffer exceeds the predetermined threshold; establishing, by the messaging module, interrupt criteria that specify when to replenish the injection FIFO buffer with the plurality of new data descriptors in the pending descriptor queue; and injecting, by the messaging module, the plurality of new data descriptors into the injection FIFO buffer in dependence upon the interrupt criteria.
A Conceptual Framework for Improving Critical Care Patient Flow and Bed Use.
Mathews, Kusum S; Long, Elisa F
2015-06-01
High demand for intensive care unit (ICU) services and limited bed availability have prompted hospitals to address capacity planning challenges. Simulation modeling can examine ICU bed assignment policies, accounting for patient acuity, to reduce ICU admission delays. To provide a framework for data-driven modeling of ICU patient flow, identify key measurable outcomes, and present illustrative analysis demonstrating the impact of various bed allocation scenarios on outcomes. A description of key inputs for constructing a queuing model was outlined, and an illustrative simulation model was developed to reflect current triage protocol within the medical ICU and step-down unit (SDU) at a single tertiary-care hospital. Patient acuity, arrival rate, and unit length of stay, consisting of a "service time" and "time to transfer," were estimated from 12 months of retrospective data (n = 2,710 adult patients) for 36 ICU and 15 SDU staffed beds. Patient priority was based on acuity and whether the patient originated in the emergency department. The model simulated the following hypothetical scenarios: (1) varied ICU/SDU sizes, (2) reserved ICU beds as a triage strategy, (3) lower targets for time to transfer out of the ICU, and (4) ICU expansion by up to four beds. Outcomes included ICU admission wait times and unit occupancy. With current bed allocation, simulated wait time averaged 1.13 (SD, 1.39) hours. Reallocating all SDU beds as ICU decreased overall wait times by 7.2% to 1.06 (SD, 1.39) hours and increased bed occupancy from 80 to 84%. Reserving the last available bed for acute patients reduced wait times for acute patients from 0.84 (SD, 1.12) to 0.31 (SD, 0.30) hours, but tripled subacute patients' wait times from 1.39 (SD, 1.81) to 4.27 (SD, 5.44) hours. Setting transfer times to wards for all ICU/SDU patients to 1 hour decreased wait times for incoming ICU patients, comparable to building one to two additional ICU beds. 
Hospital queuing and simulation modeling with empiric data inputs can evaluate how changes in ICU bed assignment could impact unit occupancy levels and patient wait times. Trade-offs associated with dedicating resources for acute patients versus expanding capacity for all patients can be examined.
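Before building a full simulation of the kind described above, the bed-allocation trade-offs can be explored analytically with a standard M/M/c queue. The sketch below is illustrative only: the bed counts and rates are hypothetical stand-ins, not the study's data, and a real ICU model would also need the priority classes and transfer delays the abstract describes.

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability an arrival must wait in an M/M/c queue (Erlang C)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-bed utilization
    assert rho < 1, "queue is unstable"
    s = sum(a**k / factorial(k) for k in range(c))
    top = a**c / (factorial(c) * (1 - rho))
    return top / (s + top)

def mean_wait(c, lam, mu):
    """Mean wait for a bed (hours, if rates are per hour)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Illustrative numbers only: 36 beds, ~7 arrivals/day, 4-day mean stay.
lam, mu = 7 / 24, 1 / (4 * 24)        # per-hour arrival and service rates
for beds in (36, 38, 40):
    print(beds, "beds -> mean wait", round(mean_wait(beds, lam, mu), 2), "h")
```

Sweeping the bed count this way mirrors the "expand by up to four beds" scenario, though the analytic model ignores acuity-based priorities.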
Evaluation of Scheduling Methods for Multiple Runways
NASA Technical Reports Server (NTRS)
Bolender, Michael A.; Slater, G. L.
1996-01-01
Several scheduling strategies are analyzed in order to determine the most efficient means of scheduling aircraft when multiple runways are operational and the airport is operating at different utilization rates. The study compares simulation data for two and three runway scenarios to results from queuing theory for an M/D/n queue. The direction taken, however, is not to do a steady-state, or equilibrium, analysis since this is not the case during a rush period at a typical airport. Instead, a transient analysis of the delay per aircraft is performed. It is shown that the scheduling strategy that reduces the delay depends upon the density of the arrival traffic. For light traffic, scheduling aircraft to their preferred runways is sufficient; however, as the arrival rate increases, it becomes more important to separate traffic by weight class. Significant delay reduction is realized when aircraft that belong to the heavy and small weight classes are sent to separate runways with large aircraft put into the 'best' landing slot.
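Because the abstract emphasizes transient rather than steady-state behaviour, a small event-driven sketch can reproduce the idea: Poisson arrivals, deterministic service, n runways, and the average delay measured over a finite rush-period horizon. The rates below are illustrative assumptions, not the paper's traffic data.

```python
import random

def transient_mdn_delay(n_servers, arrival_rate, service_time, horizon, seed=1):
    """Average delay per arrival in an M/D/n queue over a finite horizon:
    a transient measure, unlike the steady-state M/D/n formulas."""
    random.seed(seed)
    free_at = [0.0] * n_servers                 # earliest time each runway is free
    t, delays = 0.0, []
    while True:
        t += random.expovariate(arrival_rate)   # Poisson arrival process
        if t >= horizon:
            break
        i = free_at.index(min(free_at))         # earliest-free runway
        start = max(t, free_at[i])
        delays.append(start - t)                # queuing delay only
        free_at[i] = start + service_time       # deterministic service
    return sum(delays) / len(delays)

# Illustrative: 2 runways, 50 arrivals/hour, 2-minute service, 2-hour rush
print(round(transient_mdn_delay(2, 50, 1 / 30, horizon=2.0), 4))
```

Running the same sketch at several arrival rates shows the paper's qualitative point: which assignment strategy wins depends on how close the rush pushes utilization to capacity.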
Scalability Analysis and Use of Compression at the Goddard DAAC and End-to-End MODIS Transfers
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.
1998-01-01
The goal of this task is to analyze the performance of single and multiple FTP transfers between SCFs and the Goddard DAAC. We developed an analytic model to compute the performance of FTP sessions as a function of various key parameters, implemented the model as a program called FTP Analyzer, and carried out validations with real data obtained by running single and multiple FTP transfers between GSFC and the Miami SCF. The input parameters to the model include the mix of FTP sessions (the scenario) and, for each FTP session, the file size. The network parameters include the round-trip time, packet loss rate, the limiting bandwidth of the network connecting the SCF to a DAAC, TCP's basic timeout, TCP's Maximum Segment Size, and TCP's Maximum Receiver Window Size. The modeling approach consisted of modeling TCP's overall throughput, computing TCP's delay per FTP transfer, and then solving a queuing network model that includes the FTP clients and servers.
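The abstract does not reproduce the model's equations, but a widely used first-order approximation for loss- and window-limited TCP throughput that such a model builds on is the Mathis estimate. The sketch below is that generic approximation with hypothetical parameter values; it is not the FTP Analyzer model itself.

```python
from math import sqrt

def tcp_throughput(mss, rtt, loss, rwnd):
    """Approximate steady-state TCP throughput in bytes/s: the Mathis
    loss-limited estimate, capped by the receiver-window limit rwnd/rtt.
    A generic textbook approximation, not the paper's analytic model."""
    loss_limited = (mss / rtt) * sqrt(1.5 / loss)
    return min(loss_limited, rwnd / rtt)

# Illustrative parameters: 1460-byte MSS, 80 ms RTT, 0.5% loss, 64 KiB window
print(round(tcp_throughput(1460, 0.080, 0.005, 65536) / 1e3, 1), "KB/s")
```

With negligible loss the window term dominates, which is why the receiver window size appears among the model's inputs above.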
Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard
2003-12-01
Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.
NASA Astrophysics Data System (ADS)
Gowrishankar, Lavanya; Bhaskar, Vidhyacharan; Sundarammal, K.
2018-04-01
The developed model comprises a single server capable of handling two different job types, X and Y, where a type-Y job takes more time to execute than a type-X job. The objective is to construct a single server that would replace the standard M/M/2 queuing model. The method used to find the relative measures involves the cost equation. The properties of the service distribution are discussed in detail, and the maximum likelihood estimates for the parameters are obtained. The results are analytically derived for the M/Geo[xy]/1 model, and a comparison is made between the proposed model and the standard M/M/2 queue. From the numerical results, it is observed that the waiting time in queue increases as the number of cycles increases; however, the model is more economical than the M/M/2 model with a restriction on the number of time slices.
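For reference, the M/M/2 baseline against which the proposed model is compared has simple closed-form measures (with utilization ρ = λ/2μ). A sketch with illustrative rates:

```python
def mm2_metrics(lam, mu):
    """Closed-form M/M/2 measures: utilization rho, mean queue wait Wq,
    and mean time in system W (sojourn time)."""
    rho = lam / (2 * mu)
    assert rho < 1, "unstable queue"
    wq = rho**2 / (mu * (1 - rho**2))    # mean wait in queue
    return rho, wq, wq + 1 / mu          # W = Wq + mean service time

rho, wq, w = mm2_metrics(lam=1.2, mu=1.0)    # illustrative rates
print(round(wq, 4), round(w, 4))             # 0.5625 1.5625
```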
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors to mass-event venues passing through checkpoints. The mathematical model is based on a non-stationary queuing system (NQS) in which the time dependence of the request arrival rate is described by a function chosen so that its properties resemble the real arrival rates of visitors coming to a stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated, for different arrival laws, how the queue length and the waiting time for service (time in queue) depend on time, as well as the time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match. We also found how the macroscopic quantitative characteristics of the NQS depend on the number of averaging sections of the input rate.
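The piecewise-constant approximation described here is straightforward to prototype: generate Poisson arrivals segment by segment at each segment's constant rate, then push them through a single server. The profile and rates below are hypothetical stand-ins for the stadium arrival curve, not the authors' fitted function.

```python
import random

def nqs_waits(rates, seg_len, mu, seed=2):
    """Waiting times in a single-server queue whose Poisson arrival rate
    is piecewise constant: rates[i] holds on segment i of length seg_len."""
    random.seed(seed)
    arrivals = []
    for i, lam in enumerate(rates):            # arrivals, segment by segment
        t = i * seg_len
        while True:
            t += random.expovariate(lam)
            if t >= (i + 1) * seg_len:
                break
            arrivals.append(t)
    free_at, waits = 0.0, []
    for t in arrivals:                         # single server, FCFS
        start = max(t, free_at)
        waits.append(start - t)                # time spent in queue
        free_at = start + random.expovariate(mu)   # exponential service
    return waits

# Hypothetical ramp-up / peak / ramp-down profile before a match
waits = nqs_waits(rates=[5, 20, 40, 20], seg_len=10.0, mu=3.0)
print(len(waits), "visitors, max wait", round(max(waits), 2))
```

Because the peak rate briefly exceeds the service rate, the queue grows during the peak segment and drains afterwards, which is exactly the transient behaviour the paper studies.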
MOCAT: A Metagenomics Assembly and Gene Prediction Toolkit
Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R.; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer
2012-01-01
MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/. PMID:23082188
Performance modeling for large database systems
NASA Astrophysics Data System (ADS)
Schaar, Stephen; Hum, Frank; Romano, Joe
1997-02-01
One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
Wide-area-distributed storage system for a multimedia database
NASA Astrophysics Data System (ADS)
Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro
1998-12-01
We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device that includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
GPUs in a computational physics course
NASA Astrophysics Data System (ADS)
Adler, Joan; Nissim, Gal; Kiswani, Ahmad
2017-10-01
In an introductory computational physics class of the type that many of us give, time constraints lead to hard choices on topics. Everyone likes to include their own research in such a class, but an overview of many areas is paramount. Parallel programming algorithms using MPI are one important topic. Both the principle and the need to break the "fear barrier" of using a large machine with a queuing system via ssh must be successfully passed on. Due to the plateau in chip development and to power considerations, future HPC hardware choices will include heavy use of GPUs. Thus the need to introduce these at the level of an introductory course has arisen. Just as for parallel coding, explanation of the benefits and simple examples to guide the hesitant first-time user should be selected. Several student projects using GPUs that include how-to pages were proposed at the Technion. Two of the more successful ones were lattice Boltzmann and a finite element code, and we present these in detail.
An analytical study of various telecommunication networks using Markov models
NASA Astrophysics Data System (ADS)
Ramakrishnan, M.; Jayamani, E.; Ezhumalai, P.
2015-04-01
The main aim of this paper is to examine issues relating to the performance of various telecommunication networks, applying queuing theory for better design and improved efficiency. First, an analytical study of queues quantifies the phenomenon of waiting lines using representative performance measures such as average queue length (the average number of customers in the queue), average waiting time in the queue, and average facility utilization (the proportion of time the service facility is in use). Second, using a Matlab simulator, we summarize the findings of the investigations and describe the methodology used to: a) compare the waiting time and average number of messages in the queue for M/M/1 and M/M/2 queues; b) compare the performance of M/M/1 and M/D/1 queues; and c) study the effect of increasing the number of servers on the blocking probability in the M/M/k/k queue model.
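The performance measures and the M/M/k/k blocking probability mentioned above have standard closed forms. A brief sketch (parameter values are illustrative, not taken from the paper):

```python
def mm1_metrics(lam, mu):
    """Classic M/M/1 measures: mean number in queue Lq, mean queue
    wait Wq (via Little's law, Wq = Lq / lam), and utilization rho."""
    rho = lam / mu
    assert rho < 1, "unstable queue"
    lq = rho**2 / (1 - rho)
    return lq, lq / lam, rho

def erlang_b(k, a):
    """Blocking probability of an M/M/k/k loss system with offered
    load a = lam/mu, computed via the numerically stable recurrence."""
    b = 1.0
    for n in range(1, k + 1):
        b = a * b / (n + a * b)
    return b

print(mm1_metrics(0.8, 1.0))          # utilization rho = 0.8
print(round(erlang_b(5, 4.0), 4))     # -> 0.1991
```

The recurrence form of Erlang B avoids the factorial overflow that the direct formula hits for large k, which matters when sweeping the server count as in study c).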
Efficient Access to Massive Amounts of Tape-Resident Data
NASA Astrophysics Data System (ADS)
Yu, David; Lauret, Jérôme
2017-10-01
Randomly restoring files from tapes degrades the read performance, primarily due to frequent tape mounts. The high latency and time-consuming nature of tape mounts and dismounts are a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of a scheduler software, called ERADAT. This scheduler system was originally based on code from Oak Ridge National Lab, developed in the early 2000s. After some major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities, and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. ERADAT is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Raffenetti, C.
NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.
TinyOS-based quality of service management in wireless sensor networks
Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.
2009-01-01
Previously, the cost and extremely limited capabilities of sensors prohibited Quality of Service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm to improve bandwidth for high priority data in sensor networks, called Tiny-DWFQ. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows for dynamic QoS for prioritized communications by continually adjusting the treatment of communication packages according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.
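Tiny-DWFQ's dynamic weighting is specific to the paper, but the underlying weighted fair queuing idea, serving packets in order of virtual finish times proportional to size/weight, can be sketched in a few lines. This simplified version assumes all flows stay continuously backlogged; the flow names and weights are hypothetical.

```python
def wfq_order(packets):
    """Serve packets in order of weighted-fair-queuing virtual finish
    times. packets: (flow, weight, size) tuples in arrival order; since
    all flows are assumed backlogged, per-flow bookkeeping suffices."""
    finish, entries = {}, []
    for seq, (flow, weight, size) in enumerate(packets):
        f = finish.get(flow, 0.0) + size / weight   # finish = prev + size/w
        finish[flow] = f
        entries.append((f, seq, flow, size))
    return [(flow, size) for _, _, flow, size in sorted(entries)]

# Two flows: high-priority data (weight 3) vs routine data (weight 1)
pkts = [("hi", 3, 300), ("lo", 1, 300), ("hi", 3, 300), ("lo", 1, 300)]
print(wfq_order(pkts))   # both "hi" packets drain before the "lo" ones
```

A dynamic variant in the spirit of Tiny-DWFQ would adjust the weights on the fly based on congestion, rather than keeping them fixed as here.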
Quality of service routing in wireless ad hoc networks
NASA Astrophysics Data System (ADS)
Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh
2003-08-01
An efficient routing protocol is essential to guarantee application level quality of service running on wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints such as path-life, availability of sufficient energy as well as buffer space in each of the nodes on the path between the source and destination. The algorithm chooses the best path from among the multiples paths that it computes between two endpoints. We consider the use of control packets that run at a priority higher than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers such as weighted fair queuing, and weighted random early detection among others in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for real-time voice application.
NASA Astrophysics Data System (ADS)
Ismail, Zurina; Shokor, Shahrul Suhaimi AB
2016-03-01
Rapid changes in the Malaysian lifestyle have driven overwhelming growth in the service operations industry. This paper provides ideas for improving waiting line system (WLS) practices in Malaysian fast food chains. The study compares results between the single-server single-phase (SSSP) and single-server multi-phase (SSMP) configurations, using Markovian queuing (MQ) for the analysis. The new system improves the current WLS and strengthens organizational performance. The new WLS was designed and tested in a real case scenario; to develop and implement the new designs, the study focuses on the average number of customers (ANC), the average time customers spend waiting in line (ACS), and the average time customers spend waiting and being served (ABS). We introduce a new WLS design and discuss its theoretical benefits and potential issues, which will benefit other researchers.
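The ANC/ACS/ABS measures for the SSSP case are the textbook M/M/1 quantities, and one common way (assumed here, not necessarily the authors' formulation) to treat the multi-phase case is as a tandem of M/M/1 stages, where each stage sees the same Poisson arrival stream. A sketch with hypothetical rates:

```python
def sssp(lam, mu):
    """Single-server single-phase (M/M/1): ANC (mean number in system),
    ACS (mean wait in line), and ABS (mean time waiting plus service)."""
    rho = lam / mu
    assert rho < 1, "unstable queue"
    anc = rho / (1 - rho)
    abs_ = 1 / (mu - lam)
    return anc, abs_ - 1 / mu, abs_      # (ANC, ACS, ABS)

def ssmp(lam, mus):
    """Single-server multi-phase modelled as a tandem of M/M/1 stages
    (Jackson-style: each stage sees Poisson arrivals at rate lam)."""
    stages = [sssp(lam, mu) for mu in mus]
    return tuple(sum(x) for x in zip(*stages))

print(sssp(lam=0.5, mu=1.0))             # one slow counter
print(ssmp(lam=0.5, mus=[2.0, 2.0]))     # two faster phases in series
```

Comparing the two printed tuples shows how splitting service into faster phases can cut total time in system, the kind of trade-off the study examines.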
Popp, Michael P.; Searcy, Stephen S.; Sokhansanj, Shahab; ...
2015-03-25
To determine the effects of weather on harvested moisture content (MC) of switchgrass (Panicum virgatum) and energy sorghum (Sorghum bicolor), tracking of harvest progress on individual fields in the Integrated Biomass Supply and Logistics (IBSAL) model was modified to allow: i) rewetting of swathed material in the drying formulae; and ii) field queuing rules based on equipment availability and weather. Estimated crop yield and initial MC by harvest date, as observed in field trials, along with the modeling of different delays between mowing and harvest, allowed estimation of harvested MC, annual tonnage processed, and associated processing cost differences by crop and location over 10 years. Extending the hours of annual equipment use had minor implications for the cost of production. Energy sorghum proved difficult to dry in the field. Its higher yield, leading to a shorter supply distance to the plant, may justify harvesting energy sorghum early in the season with drier weather. Lastly, later harvest for lower-yielding switchgrass offers MC advantages.
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle or less-busy nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node that is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
A new task scheduling algorithm based on value and time for cloud platform
NASA Astrophysics Data System (ADS)
Kuang, Ling; Zhang, Lichen
2017-08-01
Task scheduling, a key part of increasing resource utilization and enhancing system performance, is a perennial problem, especially on cloud platforms. Based on the value density algorithm of real-time task scheduling systems and the characteristics of distributed systems, and after further study of cloud technology and real-time systems, the paper presents a new task scheduling algorithm: Least Level Value Density First (LLVDF). The algorithm not only introduces time and value attributes for tasks, it also describes the weighting relationships between these properties mathematically. This feature allows it to distinguish between different tasks more dynamically and more reasonably. When the scheme is used in the priority calculation of dynamic task scheduling on a cloud platform, this advantage lets it schedule and distinguish large numbers and many kinds of tasks more efficiently. The paper designs experiments using distributed server simulation models, based on the M/M/C queuing model with negative arrivals, to compare the algorithm against traditional algorithms and to observe and show its characteristics and advantages.
NASA Astrophysics Data System (ADS)
Tamazian, A.; Nguyen, V. D.; Markelov, O. A.; Bogachev, M. I.
2016-07-01
We suggest a universal phenomenological description for the collective access patterns in the Internet traffic dynamics both at local and wide area network levels that takes into account erratic fluctuations imposed by cooperative user behaviour. Our description is based on the superstatistical approach and leads to the q-exponential inter-session time and session size distributions that are also in perfect agreement with empirical observations. The validity of the proposed description is confirmed explicitly by the analysis of complete 10-day traffic traces from the WIDE backbone link and from the local campus area network downlink from the Internet Service Provider. Remarkably, the same functional forms have been observed in the historic access patterns from single WWW servers. The suggested approach effectively accounts for the complex interplay of both “calm” and “bursty” user access patterns within a single-model setting. It also provides average sojourn time estimates with reasonable accuracy, as indicated by the queuing system performance simulation, this way largely overcoming the failure of Poisson modelling of the Internet traffic dynamics.
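The q-exponential family mentioned here can be sampled by inverting its survival function. The parameterization below, S(x) = [1 + (q-1)x/kappa]^(-1/(q-1)) for q > 1 (equivalent to a Lomax/Pareto-II tail), is one common Tsallis form; the parameter values are illustrative, not the paper's empirical fits.

```python
import random

def q_exp_sample(q, kappa, n, seed=3):
    """Draw n samples from a q-exponential (Tsallis) distribution with
    q > 1 by inverting S(x) = [1 + (q-1)x/kappa]^(-1/(q-1)), a
    Lomax/Pareto-II form, at uniform variates."""
    random.seed(seed)
    out = []
    for _ in range(n):
        u = 1.0 - random.random()            # u in (0, 1], avoids u == 0
        out.append(kappa * (u ** (1 - q) - 1) / (q - 1))
    return out

xs = q_exp_sample(q=1.3, kappa=1.0, n=10000)
print(round(sorted(xs)[len(xs) // 2], 3))    # sample median, ~0.77 analytically
```

For q slightly above 1 the distribution is close to exponential ("calm" traffic); larger q fattens the tail, capturing the "bursty" sessions the abstract describes.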
Ethical considerations in resource allocation in a cochlear implant program.
Westerberg, Brian D; Pijl, Sipke; McDonald, Michael
2008-04-01
To review processes of resource allocation and the ethical considerations relevant to the fair allocation of a limited number of cochlear implants to increasing numbers of potential recipients. Review of relevant considerations. Tertiary referral hospital. Editorial discussion of the ethical issues of resource allocation. Heterogeneity of audiometric thresholds, self-reported disability of hearing loss, age of the potential cochlear implant recipient, cost-effectiveness, access to resources, compliance with follow-up, social support available to the recipient, social consequences of hearing impairment, and other recipient-related factors. In a publicly funded health care system, there will always be a need for decision-making processes for allocation of finite fiscal resources. All candidates for cochlear implantation deserve fair consideration. However, they are a heterogeneous group in terms of needs and expected outcomes consisting of traditional and marginal candidates, with a wide range of benefit from acoustic amplification. We argue that implant programs should thoughtfully prioritize treatment on the basis of need and potential benefit. We reject queuing on the basis of "first-come, first-served" or on the basis of perceived social worth.
A heuristic method for consumable resource allocation in multi-class dynamic PERT networks
NASA Astrophysics Data System (ADS)
Yaghoubi, Saeed; Noori, Siamak; Mazdeh, Mohammad Mahdavi
2013-06-01
This investigation presents a heuristic method for the consumable resource allocation problem in multi-class dynamic Project Evaluation and Review Technique (PERT) networks, where new projects from different classes (types) arrive at the system according to independent Poisson processes with different arrival rates. Each activity of a project is performed at a dedicated service station located in a node of the network, with exponentially distributed service times according to its class. Each project arrives at the first service station and continues its routing according to the precedence network of its class. Such a system can be represented as a queuing network in which the discipline of the queues is first come, first served. On the basis of the presented method, a multi-class system is decomposed into several single-class dynamic PERT networks, where each class is considered separately as a minisystem. In modeling the single-class dynamic PERT network, we use a Markov process and a multi-objective model investigated by Azaron and Tavakkoli-Moghaddam in 2007. Then, after obtaining the resources allocated to the service stations in every minisystem, the final resources allocated to activities are calculated by the proposed method.
Leveraging human decision making through the optimal management of centralized resources
NASA Astrophysics Data System (ADS)
Hyden, Paul; McGrath, Richard G.
2016-05-01
Combining results from mixed integer optimization, stochastic modeling, and queuing theory, we will advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision-makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows for relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long term maximum marginal utility gaps in capabilities.
Transient probabilities for queues with applications to hospital waiting list management.
Joy, Mark; Jones, Simon
2005-08-01
In this paper we study queuing systems within the NHS. Recently imposed government performance targets lead NHS executives to investigate and instigate alternative management strategies, thereby imposing structural changes on the queues. Under such circumstances, it is most unlikely that such systems are in equilibrium. It is crucial, in our opinion, to recognise this state of affairs in order to make a balanced assessment of the role of queue management in the modern NHS. From a mathematical perspective it should be emphasised that measures of the state of a queue based upon the assumption of statistical equilibrium (a pervasive methodology in the study of queues) are simply wrong in the above scenario. To base strategic decisions around such ideas is therefore highly questionable, and it is one of the purposes of this paper to offer alternatives: we present some recent research whose results generate performance measures and measures of risk, for example, of waiting times growing unacceptably large. We emphasise that these results concern the transient behaviour of the queuing model; there is no assumption of statistical equilibrium. We also demonstrate that our results are computationally tractable.
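The paper's own transient results are analytical; as a minimal numerical counterpart, the sketch below integrates the Kolmogorov forward equations of an M/M/1/K queue started empty, giving the state probabilities at a finite time rather than assuming equilibrium. The rates and capacity are illustrative assumptions, not values from the paper.

```python
def transient_mm1k(lam, mu, K, t_end, dt=0.001):
    """Euler integration of the Kolmogorov forward equations for an
    M/M/1/K queue started empty; returns state probabilities at t_end."""
    p = [0.0] * (K + 1)
    p[0] = 1.0  # queue starts empty with certainty
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (K + 1)
        for n in range(K + 1):
            if n < K:                    # arrival moves state n -> n+1
                dp[n] -= lam * p[n]
                dp[n + 1] += lam * p[n]
            if n > 0:                    # service completion moves n -> n-1
                dp[n] -= mu * p[n]
                dp[n - 1] += mu * p[n]
        p = [p[n] + dt * dp[n] for n in range(K + 1)]
    return p

# after a long horizon the transient solution approaches the stationary one
p = transient_mm1k(lam=0.5, mu=1.0, K=10, t_end=50.0)
```

Evaluating the same function at short horizons shows how far from equilibrium a recently restructured queue can be, which is precisely the regime the authors argue matters for waiting-list management.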
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
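The paper derives approximations specific to fork-join disk arrays; as a baseline for the quantities it mentions (utilization, response time, throughput), the classical M/M/1 formulas can be sketched as below. The numeric rates are hypothetical.

```python
def mm1_metrics(lam, mu):
    """Classical M/M/1 formulas (stable only: lam < mu) for the
    performance measures an analytic disk model approximates."""
    rho = lam / mu                 # utilization
    w = 1.0 / (mu - lam)           # mean response time
    x = lam                        # throughput equals arrival rate when stable
    n = rho / (1.0 - rho)          # mean number in system
    return rho, w, x, n

# e.g. 50 requests/s offered to a disk that can serve 100 requests/s
rho, w, x, n = mm1_metrics(lam=50.0, mu=100.0)
```

Little's law (n = x * w) ties the measures together and is a useful sanity check on any such model; the fork-join synchronization the authors handle is exactly what breaks these simple closed forms for arrays.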
Duda, Catherine; Rajaram, Kumar; Barz, Christiane; Rosenthal, J Thomas
2013-01-01
There has been an increasing emphasis on health care efficiency and costs and on improving quality in health care settings such as hospitals or clinics. However, there has not been sufficient work on methods of improving access and customer service times in health care settings. The study develops a framework for improving access and customer service time for health care settings. In the framework, the operational concept of the bottleneck is synthesized with queuing theory to improve access and reduce customer service times without reduction in clinical quality. The framework is applied at the Ronald Reagan UCLA Medical Center to determine the drivers for access and customer service times and then provides guidelines on how to improve these drivers. Validation using simulation techniques shows significant potential for reducing customer service times and increasing access at this institution. Finally, the study provides several practice implications that could be used to improve access and customer service times without reduction in clinical quality across a range of health care settings from large hospitals to small community clinics.
Reinventing Emergency Department Flow via Healthcare Delivery Science.
DeFlitch, Christopher; Geeting, Glenn; Paz, Harold L
2015-01-01
Healthcare system flow resulting in emergency department (ED) crowding is a quality and access problem. This case study examines an overcrowded academic health center ED with increasing patient volumes and limited physical space for expansion. ED capacity and efficiency improved through the application of engineering principles, addressing patient and staffing flows, and reinventing the delivery model. Using operational data and staff input, patient and staff flow models were created, identifying bottlenecks (points of inefficiency). A new flow model of emergency care delivery, physician-directed queuing, was developed. Expanding upon physicians in triage, providers passively evaluate all patients upon arrival, actively manage patients requiring fewer resources, and direct patients requiring complex resources to further evaluation in ED areas. Sustained over time, ED efficiency improved as measured by near elimination of "left without being seen" patients and waiting times, with improvement in door to doctor, patient satisfaction, and total length of stay. All improvements were in the setting of increased patient volume and no increase in physician staffing. Our experience suggests that practical application of healthcare delivery science can be used to improve ED efficiency. © The Author(s) 2015.
The origin of bursts and heavy tails in human dynamics.
Barabási, Albert-László
2005-05-12
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follow non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority blind execution is well approximated by uniform inter-event statistics. These findings have important implications, ranging from resource management to service allocation, in both communications and retail.
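The decision-based queuing mechanism described above is easy to reproduce in simulation. The sketch below is a minimal version of that kind of model (list length, priority probability and step count are illustrative choices): with near-deterministic highest-priority-first execution, most tasks wait one step while a few low-priority tasks wait orders of magnitude longer.

```python
import random

def priority_waits(steps=200_000, list_len=2, p_priority=0.999, seed=1):
    """Priority-queue task model: keep a fixed-length task list; with
    probability p_priority execute the highest-priority task, otherwise a
    random one. Each executed task is replaced by a new task with uniform
    random priority. Returns the waiting time (in steps) of each task."""
    random.seed(seed)
    tasks = [(random.random(), 0) for _ in range(list_len)]  # (priority, arrival step)
    waits = []
    for t in range(1, steps + 1):
        if random.random() < p_priority:
            i = max(range(list_len), key=lambda j: tasks[j][0])
        else:
            i = random.randrange(list_len)
        waits.append(t - tasks[i][1])
        tasks[i] = (random.random(), t)
    return waits

waits = priority_waits()
```

Sorting the waits shows the signature of the mechanism: a median of about one step alongside a maximum thousands of times larger, the heavy tail that priority-blind (random) execution does not produce.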
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
Role of optimization in the human dynamics of task execution
NASA Astrophysics Data System (ADS)
Cajueiro, Daniel O.; Maldonado, Wilfredo L.
2008-03-01
In order to explain the empirical evidence that the dynamics of human activity may not be well modeled by Poisson processes, a model based on queuing processes was built in the literature [A. L. Barabasi, Nature (London) 435, 207 (2005)]. The main assumption behind that model is that people execute their tasks based on a protocol that first executes the high priority item. In this context, the purpose of this paper is to analyze the validity of that hypothesis assuming that people are rational agents that make their decisions in order to minimize the cost of keeping nonexecuted tasks on the list. Therefore, we build and analytically solve a dynamic programming model with two priority types of tasks and show that the validity of this hypothesis depends strongly on the structure of the instantaneous costs that a person has to face if a given task is kept on the list for more than one period. Moreover, one interesting finding is that in one of the situations the protocol used to execute the tasks generates complex one-dimensional dynamics.
Biomass Supply Logistics and Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, Shahabaddine
2009-04-01
Feedstock supply system encompasses numerous unit operations necessary to move lignocellulosic feedstock from the place where it is produced (in the field or on the stump) to the start of the conversion process (reactor throat) of the Biorefinery. These unit operations, which include collection, storage, preprocessing, handling, and transportation, represent one of the largest technical and logistics challenges to the emerging lignocellulosic biorefining industry. This chapter briefly reviews methods of estimating the quantities of biomass followed by harvesting and collection processes based on current practices on handling wet and dry forage materials. Storage and queuing are used to deal with seasonal harvest times, variable yields, and delivery schedules. Preprocessing can be as simple as grinding and formatting the biomass for increased bulk density or improved conversion efficiency, or it can be as complex as improving feedstock quality through fractionation, tissue separation, drying, blending, and densification. Handling and transportation consists of using a variety of transport equipment (truck, train, ship) for moving the biomass from one point to another. The chapter also provides typical cost figures for harvest and processing of biomass.
Networks for Autonomous Formation Flying Satellite Systems
NASA Technical Reports Server (NTRS)
Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.
2001-01-01
The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems comprise a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures, with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for all three systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.
Wingfield, Jenna L; Mengoni, Ilaria; Bomberger, Heather; Jiang, Yu-Yang; Walsh, Jonathon D; Brown, Jason M; Picariello, Tyler; Cochran, Deborah A; Zhu, Bing; Pan, Junmin; Eggenschwiler, Jonathan; Gaertig, Jacek; Witman, George B; Kner, Peter; Lechtreck, Karl
2017-01-01
Intraflagellar transport (IFT) trains, multimegadalton assemblies of IFT proteins and motors, traffic proteins in cilia. To study how trains assemble, we employed fluorescence protein-tagged IFT proteins in Chlamydomonas reinhardtii. IFT-A and motor proteins are recruited from the cell body to the basal body pool, assembled into trains, move through the cilium, and disperse back into the cell body. In contrast to this ‘open’ system, IFT-B proteins from retrograde trains reenter the pool and a portion is reused directly in anterograde trains indicating a ‘semi-open’ system. Similar IFT systems were also observed in Tetrahymena thermophila and IMCD3 cells. FRAP analysis indicated that IFT proteins and motors of a given train are sequentially recruited to the basal bodies. IFT dynein and tubulin cargoes are loaded briefly before the trains depart. We conclude that the pool contains IFT trains in multiple stages of assembly queuing for successive release into the cilium upon completion. DOI: http://dx.doi.org/10.7554/eLife.26609.001 PMID:28562242
Understanding the heavy-tailed dynamics in human behavior
NASA Astrophysics Data System (ADS)
Ross, Gordon J.; Jones, Tim
2015-06-01
The recent availability of electronic data sets containing large volumes of communication data has made it possible to study human behavior on a larger scale than ever before. From this, it has been discovered that across a diverse range of data sets, the interevent times between consecutive communication events obey heavy-tailed power law dynamics. Explaining this has proved controversial, and two distinct hypotheses have emerged. The first holds that these power laws are fundamental, and arise from the mechanisms such as priority queuing that humans use to schedule tasks. The second holds that they are statistical artifacts which only occur in aggregated data when features such as circadian rhythms and burstiness are ignored. We use a large social media data set to test these hypotheses, and find that although models that incorporate circadian rhythms and burstiness do explain part of the observed heavy tails, there is residual unexplained heavy-tail behavior which suggests a more fundamental cause. Based on this, we develop a quantitative model of human behavior which improves on existing approaches and gives insight into the mechanisms underlying human interactions.
Biomass supply logistics and infrastructure.
Sokhansanj, Shahabaddine; Hess, J Richard
2009-01-01
Feedstock supply system encompasses numerous unit operations necessary to move lignocellulosic feedstock from the place where it is produced (in the field or on the stump) to the start of the conversion process (reactor throat) of the biorefinery. These unit operations, which include collection, storage, preprocessing, handling, and transportation, represent one of the largest technical and logistics challenges to the emerging lignocellulosic biorefining industry. This chapter briefly reviews the methods of estimating the quantities of biomass, followed by harvesting and collection processes based on current practices on handling wet and dry forage materials. Storage and queuing are used to deal with seasonal harvest times, variable yields, and delivery schedules. Preprocessing can be as simple as grinding and formatting the biomass for increased bulk density or improved conversion efficiency, or it can be as complex as improving feedstock quality through fractionation, tissue separation, drying, blending, and densification. Handling and transportation consists of using a variety of transport equipment (truck, train, ship) for moving the biomass from one point to another. The chapter also provides typical cost figures for harvest and processing of biomass.
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946
Latent heat of traffic moving from rest
NASA Astrophysics Data System (ADS)
Farzad Ahmadi, S.; Berrier, Austin S.; Doty, William M.; Greer, Pat G.; Habibi, Mohammad; Morgan, Hunter A.; Waterman, Josam H. C.; Abaid, Nicole; Boreyko, Jonathan B.
2017-11-01
Contrary to traditional thinking and driver intuition, here we show that there is no benefit to ground vehicles increasing their packing density at stoppages. By systematically controlling the packing density of vehicles queued at a traffic light on a Smart Road, drone footage revealed that the benefit of an initial increase in displacement for close-packed vehicles is completely offset by the lag time inherent to changing back into a ‘liquid phase’ when flow resumes. This lag is analogous to the thermodynamic concept of the latent heat of fusion, as the ‘temperature’ (kinetic energy) of the vehicles cannot increase until the traffic ‘melts’ into the liquid phase. These findings suggest that in situations where gridlock is not an issue, drivers should not decrease their spacing during stoppages in order to lessen the likelihood of collisions with no loss in flow efficiency. In contrast, motion capture experiments of a line of people walking from rest showed higher flow efficiency with increased packing densities, indicating that the importance of latent heat becomes trivial for slower moving systems.
Wake Vortex Advisory System (WakeVAS) Evaluation of Impacts on the National Airspace System
NASA Technical Reports Server (NTRS)
Smith, Jeremy C.; Dollyhigh, Samuel M.
2005-01-01
This report is one of a series that describes an ongoing effort in high-fidelity modeling/simulation, evaluation and analysis of the benefits and performance metrics of the Wake Vortex Advisory System (WakeVAS) Concept of Operations being developed as part of the Virtual Airspace Modeling and Simulation (VAMS) project. A previous study determined the overall increases in runway arrival rates that could be achieved at 12 selected airports due to WakeVAS reduced aircraft spacing under Instrument Meteorological Conditions. This study builds on the previous work to evaluate the NAS-wide impacts of equipping various numbers of airports with WakeVAS. A queuing network model of the National Airspace System, built by the Logistics Management Institute, McLean, VA, for NASA (LMINET), was used to estimate the reduction in delay that could be achieved by using WakeVAS under non-visual meteorological conditions for the projected air traffic demand in 2010. The results from LMINET were used to estimate the total annual delay reduction that could be achieved and, from this, an estimate of the air carrier variable operating cost saving was made.
An efficiency improvement in warehouse operation using simulation analysis
NASA Astrophysics Data System (ADS)
Samattapapong, N.
2017-11-01
In general, industry requires an efficient system for warehouse operation. There are many important factors that must be considered when designing an efficient warehouse system. The most important is an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. Given all these factors, researchers are interested in studying work systems and warehouse distribution. We start by collecting the important data for storage, such as information on products, size and location, data collection, and production, and use all this information to build a simulation model in the Flexsim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, many scenarios to improve that problem were generated and tested through the simulation analysis process. The result showed that the average queuing time was reduced from 89.8% to 48.7% and the ability to transport products increased from 10.2% to 50.9%. Thus, it can be stated that this is the best method for increasing efficiency in the warehouse operation.
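The bottleneck effect described above (a slow conveyor dominating queuing time in an otherwise fast line) can be reproduced with a tiny tandem-queue simulation. The stations, rates and arrival process below are hypothetical stand-ins, not the study's Flexsim model.

```python
import random

def tandem_sim(n_items=20_000, lam=0.45, pick_rate=1.0, belt_rate=0.5, seed=7):
    """Picking station followed by a slower conveyor belt, each FIFO with
    exponential service; with the belt near saturation (0.45/0.5 = 90%
    load) it accumulates far more waiting than the fast picking station."""
    random.seed(seed)
    t = free_pick = free_belt = 0.0
    wait_pick = wait_belt = 0.0
    for _ in range(n_items):
        t += random.expovariate(lam)            # Poisson arrivals
        start = max(t, free_pick)               # queue for the picker
        wait_pick += start - t
        free_pick = start + random.expovariate(pick_rate)
        start2 = max(free_pick, free_belt)      # queue for the belt
        wait_belt += start2 - free_pick
        free_belt = start2 + random.expovariate(belt_rate)
    return wait_pick / n_items, wait_belt / n_items

avg_pick, avg_belt = tandem_sim()
```

Comparing the two averages identifies the belt as the bottleneck, which is the same diagnostic step the study performs before generating improvement scenarios.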
Cell transmission model of dynamic assignment for urban rail transit networks.
Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian
2017-01-01
For urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing the space-time resource allocation. For obtaining the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, the cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. Then an efficient method is designed to solve the shortest path for an urban rail network, which decreases the computing cost for solving the cell transmission model. The instantaneous dynamic user optimal state can be reached with the method of successive average. Many evaluation indexes of passenger flow can be generated, to provide effective support for the optimization of train schedules and the capacity evaluation for urban rail transit network. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
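The efficient shortest-path step the abstract mentions is, at its core, a label-setting search over link travel times. As a generic sketch (the station names and times are hypothetical, and the paper's own method is tailored to transit networks), Dijkstra's algorithm over a weighted graph looks like this:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over link travel times;
    graph: {station: [(neighbor, minutes), ...]}."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]                      # reconstruct route back to the source
    while path[-1] != src:
        path.append(prev[path[-1]])
    return dist[dst], path[::-1]

# toy network with hypothetical stations A, B, C
net = {"A": [("B", 2), ("C", 5)], "B": [("C", 2)], "C": []}
cost, route = shortest_path(net, "A", "C")
```

In a cell transmission setting, the link costs would additionally reflect congestion and capacity, which is what makes the assignment dynamic rather than a one-shot graph search.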
Estimation of number of fatalities caused by toxic gases due to fire in road tunnels.
Qu, Xiaobo; Meng, Qiang; Liu, Zhiyuan
2013-01-01
The quantitative risk assessment (QRA) is one of the explicit requirements under the European Union (EU) Directive (2004/54/EC). As part of this, it is essential to be able to estimate the number of fatalities in different accident scenarios. In this paper, a tangible methodology is developed to estimate the number of fatalities caused by toxic gases due to fire in road tunnels by incorporating traffic flow and the spread of fire in tunnels. First, a deterministic queuing model is proposed to calculate the number of people at risk, by taking into account tunnel geometry, traffic flow patterns, and incident response plans for road tunnels. Second, the Fire Dynamics Simulator (FDS) is used to obtain the temperature and concentrations of CO, CO2, and O2. By taking advantage of the additivity of the fractional effective dose (FED) method, fatality rates for different locations in given time periods can be estimated. An illustrative case study is carried out to demonstrate the applicability of the proposed methodology. Copyright © 2012 Elsevier Ltd. All rights reserved.
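The additivity property of the FED method that the paper exploits can be sketched directly: each exposure interval contributes a fraction of a lethal dose, and the fractions simply sum. The lethal-threshold values below are illustrative assumptions, not figures from the paper or from any toxicity standard.

```python
def fed(concentrations_ppm, dt_min, c_lethal_ppm=5700.0, t_lethal_min=30.0):
    """Additive fractional effective dose: each interval contributes
    C * dt / (C_lethal * t_lethal); an effect is predicted once the
    running sum reaches 1. Threshold values here are illustrative."""
    total = 0.0
    for c in concentrations_ppm:
        total += (c * dt_min) / (c_lethal_ppm * t_lethal_min)
    return total

# constant exposure at the threshold concentration for the full
# reference duration accumulates exactly one effective dose
value = fed([5700.0] * 30, dt_min=1.0)
```

Because the contributions are additive, time-varying FDS concentration traces can be fed in interval by interval, which is what lets fatality rates be estimated per location and time period.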
Evolving Requirements for Magnetic Tape Data Storage Systems
NASA Technical Reports Server (NTRS)
Gniewek, John J.
1996-01-01
Magnetic tape data storage systems have evolved in an environment where the major applications have been back-up/restore, disaster recovery, and long term archive. Coincident with the rapidly improving price-performance of disk storage systems, the prime requirements for tape storage systems have remained: (1) low cost per MB, (2) a data rate balanced to the remaining system components. Little emphasis was given to configuring the technology components to optimize retrieval of the stored data. Emerging new applications, such as network attached high speed memory (HSM) and digital libraries, place additional emphasis and requirements on the retrieval of the stored data. It is therefore desirable to consider the system as defined by both storage and retrieval requirements: a STorage And Retrieval System (STARS). It is possible to provide comparative performance analysis of different STARS by incorporating parameters related to (1) device characteristics and (2) application characteristics, in combination with queuing theory analysis. Results of these analyses are presented here in the form of response time as a function of system configuration for two different types of devices and for a variety of applications.
Generating a Corpus of Mobile Forensic Images for Masquerading user Experimentation.
Guido, Mark; Brooks, Marc; Grover, Justin; Katz, Eric; Ondricek, Jared; Rogers, Marcus; Sharpe, Lauren
2016-11-01
The Periodic Mobile Forensics (PMF) system investigates user behavior on mobile devices. It applies forensic techniques to an enterprise mobile infrastructure, utilizing an on-device agent named TractorBeam. The agent collects changed storage locations for later acquisition, reconstruction, and analysis. TractorBeam provides its data to an enterprise infrastructure that consists of a cloud-based queuing service, relational database, and analytical framework for running forensic processes. During a 3-month experiment with Purdue University, TractorBeam was utilized in a simulated operational setting across 34 users to evaluate techniques to identify masquerading users (i.e., users other than the intended device user). The research team surmises that all masqueraders are undesirable to an enterprise, even when a masquerader lacks malicious intent. The PMF system reconstructed 821 forensic images, extracted one million audit events, and accurately detected masqueraders. Evaluation revealed that developed methods reduced storage requirements 50-fold. This paper describes the PMF architecture, performance of TractorBeam throughout the protocol, and results of the masquerading user analysis. © 2016 American Academy of Forensic Sciences.
Automated observatory in Antarctica: real-time data transfer on constrained networks in practice
NASA Astrophysics Data System (ADS)
Bracke, Stephan; Gonsette, Alexandre; Rasson, Jean; Poncelet, Antoine; Hendrickx, Olivier
2017-08-01
In 2013 a project was started by the geophysical centre in Dourbes to install a fully automated magnetic observatory in Antarctica. This isolated place comes with specific requirements: an unmanned station for 6 months, low temperatures with extreme values down to -50 °C, minimal power consumption, and satellite bandwidth limited to 56 kbit/s. The ultimate aim is to transfer real-time magnetic data every second: vector data from a LEMI-25 vector magnetometer, absolute F measurements from a GEM Systems scalar proton magnetometer, and absolute magnetic inclination-declination (DI) measurements (five times a day) with an automated DI-fluxgate magnetometer. Traditional file transfer protocols (for instance File Transfer Protocol (FTP), email, rsync) show severe limitations when it comes to real-time capability. After evaluating the pros and cons of the available real-time Internet of things (IoT) protocols and seismic software solutions, we chose to use Message Queuing Telemetry Transport (MQTT) and receive the 1 s data with a negligible latency cost and no loss of data. Each individual instrument sends the magnetic data immediately after capturing, and the data arrive approximately 300 ms after being sent, which corresponds to the normal satellite latency.
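The link-budget reasoning behind sending 1 Hz samples over a 56 kbit/s satellite channel can be sketched as follows. This is a rough feasibility check, not from the paper; the per-message payload and MQTT overhead sizes are assumptions.

```python
# Rough check (illustrative, not from the paper): can one 1 Hz magnetometer
# message fit comfortably within a 56 kbit/s satellite link?
def transmission_time_s(payload_bytes: int, link_bps: int = 56_000) -> float:
    """Serialization delay of one message on the constrained link."""
    return payload_bytes * 8 / link_bps

# Assumed per-second message: 3 vector components (8 bytes each), one scalar
# F value, a timestamp, plus ~2 bytes MQTT fixed header and ~20 bytes topic.
payload = 3 * 8 + 8 + 8 + 2 + 20   # 62 bytes (assumed)
t = transmission_time_s(payload)

print(f"{t * 1000:.1f} ms per message")     # well under the 1 s budget
print(f"link utilisation: {t * 100:.1f}%")  # fraction of each second used
```

With these assumed sizes the serialization delay is a few milliseconds, so the ~300 ms observed latency is dominated by satellite propagation, not by the link rate.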
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition.
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
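The queued-state-machine technique named above can be illustrated with a short sketch: state tokens arrive on a queue and a single loop dispatches them in order, so commands are serialized and none are lost. The state names and handlers below are illustrative, not from the actual LabVIEW controller.

```python
# Minimal sketch of the queued-state-machine pattern (illustrative names):
# a queue of state tokens drives one dispatch loop, serializing commands.
from queue import Queue

def run_qsm(commands):
    q = Queue()
    for c in commands:
        q.put(c)                    # producers enqueue state tokens
    log = []
    handlers = {
        "init":    lambda: log.append("hardware configured"),
        "acquire": lambda: log.append("buffer read"),
        "stop":    lambda: log.append("acquisition stopped"),
    }
    while not q.empty():
        state = q.get()
        handlers[state]()           # dispatch on the dequeued state token
    return log

print(run_qsm(["init", "acquire", "acquire", "stop"]))
```

In a real DAQ controller the producers would be hardware-event callbacks and the loop would run continuously in its own thread; the ordering guarantee is the same.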
Modeling service time reliability in urban ferry system
NASA Astrophysics Data System (ADS)
Chen, Yifan; Luo, Sida; Zhang, Mengke; Shen, Hanxia; Xin, Feifei; Luo, Yujie
2017-09-01
The urban ferry system can carry a large number of travelers, which may alleviate the pressure on road traffic. As an indicator of its service quality, service time reliability (STR) plays an essential part in attracting travelers to the ferry system. A wide array of studies have been conducted to analyze the STR of land transportation. However, the STR of ferry systems has received little attention in the transportation literature. In this study, a model was established to obtain the STR in urban ferry systems. First, the probability density function (PDF) of the service time provided by ferry systems was constructed. Considering the deficiency of the queuing theory, this PDF was determined by Bayes’ theorem. Then, to validate the function, the results of the proposed model were compared with those of the Monte Carlo simulation. With the PDF, the reliability could be determined mathematically by integration. Results showed how the factors including the frequency, capacity, time schedule and ferry waiting time affected the STR under different degrees of congestion in ferry systems. Based on these results, some strategies for improving the STR were proposed. These findings are of great significance to increasing the share of ferries among various urban transport modes.
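The Monte Carlo validation step described above can be sketched for a simplified ferry line: a passenger arrives at a random point in the headway, may have to wait extra headways if the queue ahead exceeds one ferry's capacity, and the STR is the fraction of trials whose total service time falls under a threshold. All parameter values are invented for illustration and this sketch does not reproduce the paper's Bayes-based PDF.

```python
# Monte Carlo sketch of service-time reliability (STR) for a ferry line,
# under assumed parameters (headway in minutes, capacity in passengers).
import random

def str_monte_carlo(headway=15.0, capacity=200, arrivals_per_headway=450,
                    crossing=20.0, threshold=40.0, trials=20_000, seed=1):
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        wait = random.uniform(0.0, headway)            # random arrival instant
        queue_ahead = random.randint(0, arrivals_per_headway)
        # if the queue ahead fills whole ferries, wait that many extra headways
        wait += headway * (queue_ahead // capacity)
        if wait + crossing <= threshold:
            ok += 1
    return ok / trials

print(f"STR = {str_monte_carlo():.3f}")
```

Raising the frequency (shorter headway) or the capacity in this sketch raises the STR, mirroring the factor analysis reported in the abstract.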
Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization
Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan
2017-01-01
This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources. PMID:29104748
Performance Evaluation Model for Application Layer Firewalls.
Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan
2016-01-01
Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
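The Erlangian per-layer analysis rests on standard multi-server queuing formulas such as Erlang C; a minimal sketch with illustrative values (desk counts and loads are not from the paper):

```python
# Erlang C sketch: probability an arriving request must queue at one service
# layer with c identical desks and offered load a = lambda/mu (a < c).
from math import factorial

def erlang_c(c: int, a: float) -> float:
    """P(wait > 0) for an M/M/c queue with offered load a = lambda/mu."""
    num = a**c / factorial(c) * c / (c - a)
    den = sum(a**k / factorial(k) for k in range(c)) + num
    return num / den

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Expected queuing delay Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam / mu) / (c * mu - lam)

print(f"P(wait) with 3 desks, load 2.0: {erlang_c(3, 2.0):.3f}")
print(f"mean delay: {mean_wait(3, 2.0, 1.0):.3f}")
```

Sweeping the desk count `c` per layer, as in the paper's resource-allocation study, then amounts to evaluating throughput, delay, and loss under each candidate allocation and picking the best.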
Computer models of social processes: the case of migration.
Beshers, J M
1967-06-01
The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.
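The model's core step, projecting a distribution vector through a transition matrix whose probabilities are then recomputed from the previous state, can be sketched as follows. The two-state setup and the saturation update rule are invented for illustration; the original program's update functions are not specified in the abstract.

```python
# Sketch of the mechanism: project a population distribution through a
# row-stochastic transition matrix, then damp in-migration to a saturated
# state via a toy "market mechanism" (update rule is illustrative).
def project(dist, P):
    """One period: new_j = sum_i dist_i * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def saturate(P, dist, capacity):
    """Shift probability mass away from state 1 when it nears capacity."""
    scale = min(1.0, capacity / max(dist[1], 1e-9))
    return [[P[i][0] + P[i][1] * (1 - scale), P[i][1] * scale]
            for i in range(2)]      # rows still sum to 1

dist = [800.0, 200.0]               # e.g. non-metro vs metro population
P = [[0.9, 0.1], [0.2, 0.8]]        # row-stochastic transitions
for _ in range(3):
    P = saturate(P, dist, capacity=300.0)
    dist = project(dist, P)
print([round(x) for x in dist])
```

Total population is conserved by construction, since each updated row of the matrix still sums to one.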
The design of traffic signal coordinated control
NASA Astrophysics Data System (ADS)
Guo, Xueting; Sun, Hongsheng; Wang, Xifu
2017-05-01
Transportation, as a tertiary industry, is an important pillar supporting normal economic development. In China, however, road traffic development has lagged far behind economic development, and this imbalance has significantly inhibited economic growth. Many large and medium-sized Chinese cities are now building "green wave" corridors. A green wave links several signalized intersections along a road, where conditions permit, into a single coordinated traffic control system, so that a driver travelling at a specified speed can pass through the successive intersections without stopping. Green waves can effectively reduce vehicle delay and queue length, help urban roads function normally, and reduce the economic losses caused by traffic congestion. This paper first describes the theoretical basis for the design of the coordinated control system. The green-time offsets are then calculated analytically and the green wave is established. Finally, the VISSIM software is used to simulate the traffic system before and after the improvement, and the results of the two simulations are compared.
Cargo container inspection test program at ARPA's Nonintrusive Inspection Technology Testbed
NASA Astrophysics Data System (ADS)
Volberding, Roy W.; Khan, Siraj M.
1994-10-01
An x-ray-based cargo inspection system test program is being conducted at the Advanced Research Project Agency (ARPA)-sponsored Nonintrusive Inspection Technology Testbed (NITT) located in the Port of Tacoma, Washington. The test program seeks to determine the performance that can be expected from a dual, high-energy x-ray cargo inspection system when inspecting ISO cargo containers. This paper describes an intensive, three-month system test involving two independent test groups, one representing the criminal smuggling element and the other representing the law enforcement community. The first group, the "Red Team", prepares ISO containers for inspection at an off-site facility. An algorithm randomly selects and indicates the positions and preparation of cargoes within a container. The prepared container is dispatched to the NITT for inspection by the "Blue Team". After in-gate processing, it is queued for examination. The Blue Team inspects the container and decides whether or not to pass the container. The shipment undergoes out-gate processing and returns to the Red Team. The results of the inspection are recorded for subsequent analysis. The test process (including its governing protocol, the cargoes, container preparation, and the examination) and the results available at the time of submission are presented.
Queuing theory models used for port equipment sizing
NASA Astrophysics Data System (ADS)
Dragu, V.; Dinu, O.; Ruscă, A.; Burciu, Ş.; Roman, E. A.
2017-08-01
The significant growth of volumes and distances in road transportation has created the need to find solutions that increase water transportation's market share, together with the handling and transfer technologies within its terminals. It is widely known that the largest share of transport time is spent within terminals (loading/unloading/transfer), hence the need to constantly develop handling techniques and technologies matched to the size of the goods flows, so that the total waiting time of ships within ports is reduced. Port development should be achieved by harmonizing the contradictory interests of port administration and users: port administrators aim to increase profit, whereas users want savings through an increased consumer surplus. The difficulty is that the transport demand-supply equilibrium must be reached at costs and goods quantities transiting the port that satisfy the interests of both parties. This paper presents a port equipment sizing model using queuing theory, such that the sum of the costs of ships waiting for operations and of equipment usage is minimized. Ship operation within the port is modeled as a queuing (mass service) system whose parameters are then used to determine the main costs for ships and port equipment.
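The sizing idea described above can be sketched with a standard M/M/c port queue: for each candidate number of equipment units, compute the mean queue of waiting ships, price it, add the equipment cost, and keep the minimum. All rates and costs below are illustrative, not from the paper.

```python
# Sketch: choose the number of equipment units c that minimizes
# (ship waiting cost + equipment cost) for an M/M/c port queue.
from math import factorial

def lq(c, lam, mu):
    """Mean queue length Lq of an M/M/c system (requires lam < c*mu)."""
    a, rho = lam / mu, lam / (c * mu)
    wait_term = a**c / factorial(c) / (1 - rho)
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c)) + wait_term)
    return wait_term * p0 * rho / (1 - rho)      # Erlang C * rho/(1-rho)

def best_size(lam, mu, cost_ship_wait, cost_equipment, c_max=20):
    costs = {c: c * cost_equipment + lq(c, lam, mu) * cost_ship_wait
             for c in range(1, c_max + 1) if lam < c * mu}
    return min(costs, key=costs.get)

# 4 ships/day arriving, each unit serves 1.5 ships/day; a waiting ship
# costs 10x as much per day as running one more equipment unit
print("optimal number of units:", best_size(4.0, 1.5, 1000.0, 100.0))
```

The convex trade-off is visible in the cost dictionary: too few units and waiting cost explodes, too many and idle equipment dominates.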
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and average dependences are obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors admitted to the stadium over time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the argument at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
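A time-stepped sketch of such a non-stationary queuing system follows: arrivals with rate λ(t) = λdet(t) + λrnd(t) feed a bank of entrance gates, and the queue length is tracked over time. The Gaussian-shaped deterministic rate, the uniform random component, and the gate parameters are invented for illustration and are not the fitted stadium parameters.

```python
# Time-stepped sketch of a non-stationary queue with rate
# lam(t) = lam_det(t) + lam_rnd(t) feeding a bank of entrance gates.
import math
import random

def simulate(gates=10, service_rate=0.5, horizon=120, seed=7):
    """service_rate: visitors per gate per minute; horizon in minutes."""
    random.seed(seed)
    queue, history = 0.0, []
    for t in range(horizon):
        lam_det = 40.0 * math.exp(-((t - 60) / 25.0) ** 2)  # arrival peak
        lam_rnd = random.uniform(-5.0, 5.0)                 # random component
        arrivals = max(0.0, lam_det + lam_rnd)
        served = min(queue + arrivals, gates * service_rate)
        queue = queue + arrivals - served
        history.append(queue)
    return history

h = simulate()
print(f"peak queue: {max(h):.0f} visitors at minute {h.index(max(h))}")
```

From traces like `h` one can read off exactly the characteristics the abstract lists: the maximum queue length, the time at which it occurs, and the time needed to serve all visitors.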
Achieving fast and stable failure detection in WDM Networks
NASA Astrophysics Data System (ADS)
Gao, Donghui; Zhou, Zhiyu; Zhang, Hanyi
2005-02-01
In dynamic networks, failure detection time takes a major part of the convergence time, which is an important network performance index. To detect a node or link failure, traditional protocols, such as the Hello protocol in OSPF or RSVP, exchange keep-alive messages between neighboring nodes to track link/node state. With default settings, however, the minimum detection time is on the order of tens of seconds, which cannot meet the demands of fast network convergence and failure recovery, and configuring the related parameters to reduce the detection time causes notable instability problems. In this paper, we analyze the problem and design a new failure detection algorithm that reduces the network overhead of detection signaling. Our experiments show that treating other signaling messages as implicit keep-alive acknowledgments is effective in enhancing stability. We evaluated our proposal and the previous approaches on an ASON test-bed. The experimental results show that our algorithm outperforms previous schemes, with roughly an order-of-magnitude reduction both in false failure alarms and in the queuing delay imposed on other messages, especially under light traffic load.
Highball: A high speed, reserved-access, wide area network
NASA Technical Reports Server (NTRS)
Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.
1990-01-01
A network architecture called Highball and a preliminary design for a prototype, wide-area data network designed to operate at speeds of 1 Gbps and beyond are described. It is intended for applications requiring high speed burst transmissions where some latency between requesting a transmission and granting the request can be anticipated and tolerated. Examples include real-time video and disk-disk transfers, national filestore access, remote sensing, and similar applications. The network nodes include an intelligent crossbar switch, but have no buffering capabilities; thus, data must be queued at the end nodes. There are no restrictions on the network topology, link speeds, or end-end protocols. The end system, nodes, and links can operate at any speed up to the limits imposed by the physical facilities. An overview of an initial design approach is presented and is intended as a benchmark upon which a detailed design can be developed. It describes the network architecture and proposed access protocols, as well as functional descriptions of the hardware and software components that could be used in a prototype implementation. It concludes with a discussion of additional issues to be resolved in continuing stages of this project.
Decentralized Hypothesis Testing in Energy Harvesting Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Tarighati, Alla; Gross, James; Jalden, Joakim
2017-09-01
We consider the problem of decentralized hypothesis testing in a network of energy harvesting sensors, where sensors make noisy observations of a phenomenon and send quantized information about the phenomenon towards a fusion center. The fusion center makes a decision about the present hypothesis using the aggregate received data during a time interval. We explicitly consider a scenario under which the messages are sent through parallel access channels towards the fusion center. To avoid limited lifetime issues, we assume each sensor is capable of harvesting all the energy it needs for the communication from the environment. Each sensor has an energy buffer (battery) to save its harvested energy for use in other time intervals. Our key contribution is to formulate the problem of decentralized detection in a sensor network with energy harvesting devices. Our analysis is based on a queuing-theoretic model for the battery and we propose a sensor decision design method by considering long term energy management at the sensors. We show how the performance of the system changes for different battery capacities. We then numerically show how our findings can be used in the design of sensor networks with energy harvesting sensors.
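The battery-as-queue idea above can be sketched in a few lines: harvested energy units arrive at random, each transmitted message consumes one unit, and the sensor stays silent when the buffer is empty. The harvest and demand probabilities, the slot structure, and the buffer sizes are all assumptions for illustration, not the paper's model parameters.

```python
# Sketch of the battery-as-queue model: energy units arrive randomly into a
# finite buffer; each transmission spends one unit; empty buffer = outage.
import random

def battery_outage_rate(harvest_p=0.5, demand_p=0.5, capacity=5,
                        slots=100_000, seed=3):
    """Fraction of transmission requests that find an empty energy buffer."""
    random.seed(seed)
    level, outages, demands = 0, 0, 0
    for _ in range(slots):
        if random.random() < harvest_p:        # one energy unit harvested
            level = min(capacity, level + 1)   # overflow energy is wasted
        if random.random() < demand_p:         # sensor wants to send a message
            demands += 1
            if level > 0:
                level -= 1                     # spend one unit to transmit
            else:
                outages += 1                   # silent slot: battery empty
    return outages / max(demands, 1)

for cap in (1, 2, 5, 10):
    print(f"capacity {cap:2d}: outage rate {battery_outage_rate(capacity=cap):.3f}")
```

The printed sweep shows the qualitative effect the abstract reports: larger battery capacity smooths out harvesting randomness and lowers the outage rate.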
Advanced access: reducing waiting and delays in primary care.
Murray, Mark; Berwick, Donald M
2003-02-26
Delay of care is a persistent and undesirable feature of current health care systems. Although delay seems to be inevitable and linked to resource limitations, it often is neither. Rather, it is usually the result of unplanned, irrational scheduling and resource allocation. Application of queuing theory and principles of industrial engineering, adapted appropriately to clinical settings, can reduce delay substantially, even in small practices, without requiring additional resources. One model, sometimes referred to as advanced access, has increasingly been shown to reduce waiting times in primary care. The core principle of advanced access is that patients calling to schedule a physician visit are offered an appointment the same day. Advanced access is not sustainable if patient demand for appointments is permanently greater than physician capacity to offer appointments. Six elements of advanced access are important in its application: balancing supply and demand, reducing backlog, reducing the variety of appointment types, developing contingency plans for unusual circumstances, working to adjust demand profiles, and increasing the availability of bottleneck resources. Although these principles are powerful, they are counter to deeply held beliefs and established practices in health care organizations. Adopting these principles requires strong leadership investment and support.
Yuzenkova, Yulia; Gamba, Pamela; Herber, Martijn; Attaiech, Laetitia; Shafeeq, Sulman; Kuipers, Oscar P; Klumpp, Stefan; Zenkin, Nikolay; Veening, Jan-Willem
2014-01-01
Transcription by RNA polymerase may be interrupted by pauses caused by backtracking or misincorporation that can be resolved by the conserved bacterial Gre-factors. However, the consequences of such pausing in the living cell remain obscure. Here, we developed molecular biology and transcriptome sequencing tools in the human pathogen Streptococcus pneumoniae and provide evidence that transcription elongation is rate-limiting on highly expressed genes. Our results suggest that transcription elongation may be a highly regulated step of gene expression in S. pneumoniae. Regulation is accomplished via long-living elongation pauses and their resolution by elongation factor GreA. Interestingly, mathematical modeling indicates that long-living pauses cause queuing of RNA polymerases, which results in 'transcription traffic jams' on the gene and thus blocks its expression. Together, our results suggest that long-living pauses and RNA polymerase queues caused by them are a major problem on highly expressed genes and are detrimental for cell viability. The major and possibly sole function of GreA in S. pneumoniae is to prevent formation of backtracked elongation complexes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Hotspot Identification for Shanghai Expressways Using the Quantitative Risk Assessment Method
Chen, Can; Li, Tienan; Sun, Jian; Chen, Feng
2016-01-01
Hotspot identification (HSID) is the first and key step of the expressway safety management process. This study presents a new HSID method using the quantitative risk assessment (QRA) technique. Crashes that are likely to happen for a specific site are treated as the risk. The aggregation of the crash occurrence probability for all exposure vehicles is estimated based on the empirical Bayesian method. As for the consequences of crashes, crashes may not only cause direct losses (e.g., occupant injuries and property damages) but also result in indirect losses. The indirect losses are expressed by the extra delays calculated using the deterministic queuing diagram method. The direct losses and indirect losses are uniformly monetized to be considered as the consequences of this risk. The potential costs of crashes, as a criterion to rank high-risk sites, can be explicitly expressed as the sum of the crash probability for all passing vehicles and the corresponding consequences of crashes. A case study on the urban expressways of Shanghai is presented. The results show that the new QRA method for HSID enables the identification of a set of high-risk sites that truly reveal the potential crash costs to society. PMID:28036009
Real-Time Data Processing in the muon system of the D0 detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neeti Parashar et al.
2001-07-03
This paper presents a real-time application of the 16-bit fixed point Digital Signal Processors (DSPs), in the Muon System of the D0 detector located at the Fermilab Tevatron, presently the world's highest-energy hadron collider. As part of the Upgrade for a run beginning in the year 2000, the system is required to process data at an input event rate of 10 kHz without incurring significant deadtime in readout. The ADSP21csp01 processor has high I/O bandwidth, single cycle instruction execution and fast task switching support to provide efficient multisignal processing. The processor's internal memory consists of 4K words of Program Memory and 4K words of Data Memory. In addition there is an external memory of 32K words for general event buffering and 16K words of Dual port Memory for input data queuing. This DSP fulfills the requirement of the Muon subdetector systems for data readout. All error handling, buffering, formatting and transferring of the data to the various trigger levels of the data acquisition system is done in software. The algorithms developed for the system complete these tasks in about 20 μs per event.
Adaptive shaping of the behavioural and neuroendocrine phenotype during adolescence
Kaiser, Sylvia; Hennessy, Michael B.; Sachser, Norbert
2017-01-01
Environmental conditions during early life can adaptively shape the phenotype for the prevailing environment. Recently, it has been suggested that adolescence represents an additional temporal window for adaptive developmental plasticity, though supporting evidence is scarce. Previous work has shown that male guinea pigs living in large mixed-sex colonies develop a low-aggressive phenotype as part of a queuing strategy that is adaptive for integrating into large unfamiliar colonies. By contrast, males living in pairs during adolescence become highly aggressive towards strangers. Here, we tested whether the high-aggressive phenotype is adaptive under conditions of low population density, namely when directly competing with a single opponent for access to females. For that purpose, we established groups of one pair-housed male (PM), one colony-housed male (CM) and two females. PMs directed more aggression towards the male competitor and more courtship and mating towards females than did CMs. In consequence, PMs attained the dominant position in most cases and sired significantly more offspring. Moreover, they showed distinctly higher testosterone concentrations and elevated cortisol levels, which probably promoted enhanced aggressiveness while mobilizing necessary energy. Taken together, our results provide the clearest evidence to date for adaptive shaping of the phenotype by environmental influences during adolescence. PMID:28202817
NASA Astrophysics Data System (ADS)
Kumar, Love; Sharma, Vishal; Singh, Amarpal
2017-12-01
Wireless Sensor Networks (WSNs) have an assortment of application areas, for instance civil, military, and video surveillance, while having restricted power resources and transmission links. Accommodating the massive traffic load of large sensor networks is another key issue. Consequently, there is a need to backhaul the sensed information of such networks and extend the transmission link to reach distinct receivers. Passive Optical Network (PON), a next-generation access technology, emerges as a suitable candidate for conveying the sensed data to the core network. Earlier demonstrated work with a single-OLT PON suffers from buffer overload in scenarios such as video surveillance. In this paper, to combine the bandwidth potential of PONs with the mobility capability of WSNs, the viability of converging PONs and WSNs with multiple optical line terminals (OLTs) is demonstrated in order to handle the overloaded OLTs. The existing M/M/1 queuing model, with interleaved polling with adaptive cycle time as the dynamic bandwidth algorithm, is used to reduce the probability of packet collisions. Further, the proposed multi-sink WSN and multi-OLT PON converged structure is investigated in bidirectional mode, both analytically and through computer simulations. The observations establish that the proposed structure can accommodate the colossal data traffic with less time consumption.
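The M/M/1 analysis underlying the converged PON/WSN model uses standard closed-form results; a quick sketch of the quantities involved (the packet rates are illustrative, not from the paper):

```python
# Standard M/M/1 results: utilisation, mean occupancy, and mean delay.
def mm1_metrics(lam: float, mu: float):
    """Return (utilisation, mean packets in system, mean sojourn time)."""
    if lam >= mu:
        raise ValueError("queue is unstable: need lam < mu")
    rho = lam / mu
    L = rho / (1 - rho)    # mean number of packets in the system
    W = 1 / (mu - lam)     # mean delay per packet (Little's law: L = lam*W)
    return rho, L, W

# one OLT serving an offered sensor load of 800 packets/s at 1000 packets/s
rho, L, W = mm1_metrics(800.0, 1000.0)
print(f"utilisation {rho:.0%}, {L:.1f} packets in system, delay {W*1000:.1f} ms")

# splitting the same load across two OLTs halves each queue's utilisation
print(mm1_metrics(400.0, 1000.0))
```

The second call illustrates the multi-OLT argument: because delay grows as 1/(μ-λ), halving the per-OLT load reduces delay much more than proportionally near saturation.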
NASA Astrophysics Data System (ADS)
Shojima, Taiki; Ikkai, Yoshitomo; Komoda, Norihisa
An incentive-attached peer-to-peer (P2P) electronic coupon system is proposed in which users forward e-coupons to potential users, with incentives provided to the mediators. Because the system is intended for a pure P2P environment, the service provider must acquire the distribution history for incentive payment by recording UserIDs (UIDs) in the e-coupons. This creates the risk that the distribution history is dishonestly altered. To solve this problem, the distribution history is realized as a pair of queues: the UID queue and the public key queue. Each element of the UID queue in its initial state consists of an index, a secret key, and a digital signature. When recording a UID, the encrypted UID is enqueued to the UID queue together with a new digital signature created with the secret key of the dequeued element, so that no UID can be altered. The public key queue provides the functionality for validating digital signatures on mobile devices. This method makes it possible to certify both each individual UID and their sequence. The availability of the method is evaluated by quantifying risk reduction using Fault Tree Analysis (FTA), and the method is found to be superior to common encryption methods.
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
How cognitive heuristics can explain social interactions in spatial movement.
Seitz, Michael J; Bode, Nikolai W F; Köster, Gerta
2016-08-01
The movement of pedestrian crowds is a paradigmatic example of collective motion. The precise nature of individual-level behaviours underlying crowd movements has been subject to a lively debate. Here, we propose that pedestrians follow simple heuristics rooted in cognitive psychology, such as 'stop if another step would lead to a collision' or 'follow the person in front'. In other words, our paradigm explicitly models individual-level behaviour as a series of discrete decisions. We show that our cognitive heuristics produce realistic emergent crowd phenomena, such as lane formation and queuing behaviour. Based on our results, we suggest that pedestrians follow different cognitive heuristics that are selected depending on the context. This differs from the widely used approach of capturing changes in behaviour via model parameters and leads to testable hypotheses on changes in crowd behaviour for different motivation levels. For example, we expect that rushed individuals more often evade to the side and thus display distinct emergent queue formations in front of a bottleneck. Our heuristics can be ranked according to the cognitive effort that is required to follow them. Therefore, our model establishes a direct link between behavioural responses and cognitive effort and thus facilitates a novel perspective on collective behaviour. © 2016 The Author(s).
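The "stop if another step would lead to a collision" heuristic can be sketched in one dimension; the step length and minimum gap below are illustrative assumptions, not the study's calibrated values:

```python
def next_step(position, goal, others, step=1.0, min_gap=1.0):
    """One-dimensional sketch of the 'stop if another step would lead to a
    collision' heuristic. Step length and minimum gap are illustrative
    assumptions, not calibrated values from the study."""
    direction = 1.0 if goal > position else -1.0
    candidate = position + direction * step
    for other in others:
        if abs(candidate - other) < min_gap:
            return position          # stop: stepping would close the gap
    return candidate                 # free space ahead: take the step

# A pedestrian heading to x=10 queues behind someone standing at x=1.5,
# but steps freely when the other person is far away.
print(next_step(0.0, 10.0, [1.5]))   # 0.0 (stops, queuing emerges)
print(next_step(0.0, 10.0, [5.0]))   # 1.0 (takes the step)
```

Iterating this decision over many pedestrians is what produces the emergent queue formations the abstract describes.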
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Collins, Leslie M.
2016-05-01
Many remote sensing modalities have been developed for buried target detection (BTD), each one offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently an approach was developed, called multi-state management (MSM), that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM, and to guide research to yield the most performance benefits. In this work an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments are conducted using a dataset of real, field-collected, data which demonstrates how the Q-MSM model can be used to evaluate performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.
DOVIS: an implementation for high-throughput virtual screening using AutoDock.
Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques
2008-02-27
Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
Ren, Lantian; Cafferty, Kara; Roni, Mohammad; ...
2015-06-11
This paper analyzes the rural Chinese biomass supply system and models supply chain operations according to U.S. concepts of logistical unit operations: harvest and collection, storage, transportation, preprocessing, and handling and queuing. In this paper, we quantify the logistics cost of corn stover and sweet sorghum in China under different scenarios. We analyze three scenarios of corn stover logistics from northeast China and three scenarios of sweet sorghum stalk logistics from Inner Mongolia in China. The case study estimates the logistics cost of corn stover and sweet sorghum stalk to be $52.95/dry metric ton and $52.64/dry metric ton, respectively, for the current labor-based biomass logistics system. However, if the feedstock logistics operation is mechanized, the cost of corn stover and sweet sorghum stalk decreases to $36.01/dry metric ton and $35.76/dry metric ton, respectively. The study also includes a sensitivity analysis to identify the cost factors that cause logistics cost variation. Results of the sensitivity analysis show that labor price has the most influence on the logistics cost of corn stover and sweet sorghum stalk, with a variation of $6 to $12/dry metric ton.
Tough luck and tough choices: applying luck egalitarianism to oral health.
Albertsen, Andreas
2015-06-01
Luck egalitarianism is often taken to task for its alleged harsh implications. For example, it may seem to imply a policy of nonassistance toward uninsured reckless drivers who suffer injuries. Luck egalitarians respond to such objections partly by pointing to a number of factors pertaining to the cases being debated, which suggests that their stance is less inattentive to the plight of the victims than it might seem at first. However, the strategy leaves some cases in which the attribution of individual responsibility is appropriate (and so, it seems, is asking people to pick up the tab for their choices). One such case is oral health or significant aspects of this. It is appropriate, the paper argues, to hold people responsible for a number of factors that affect their oral health. A luck egalitarian approach inspired by John Roemer can assess whether people have acted responsibly by comparing their choices to those of their peers. A luck egalitarian approach to oral health would recommend prioritizing scarce resources in a responsibility-weighted queuing system and include copayment and general taxation among its measures of financing. © The Author 2015. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Wang, Jinghong; Lo, Siuming; Wang, Qingsong; Sun, Jinhua; Mu, Honglin
2013-08-01
Crowd density is a key factor that influences the moving characteristics of a large group of people during a large-scale evacuation. In this article, the macro features of crowd flow and subsequent rescue strategies were considered, and a series of characteristic crowd densities that affect large-scale people movement, as well as the maximum bearing density when the crowd is extremely congested, were analyzed. On the basis of these characteristic crowd densities, queuing theory was applied to simulate crowd movement. Accordingly, the moving characteristics of the crowd, and the effects of typical crowd density (viewed as representing the crowd's arrival intensity in front of the evacuation passageways) on rescue strategies, were studied. Furthermore, a "risk axle of crowd density" is proposed to determine the efficiency of rescue strategies in a large-scale evacuation, i.e., whether the rescue strategies are able to effectively maintain or improve evacuation efficiency. Finally, through some rational hypotheses on the value of evacuation risk, a three-dimensional distribution of evacuation risk is established to illustrate the risk axle of crowd density. This work aims to provide a macro-level but original analysis of the risk of large-scale crowd evacuation from the perspective of rescue-strategy efficiency. © 2012 Society for Risk Analysis.
Video transmission on ATM networks. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1993-01-01
The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport techniques for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of wide spread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models are validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
Priority queues with bursty arrivals of incoming tasks
NASA Astrophysics Data System (ADS)
Masuda, N.; Kim, J. S.; Kahng, B.
2009-03-01
Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ∼ τ^(−δ), where τ is the interevent time and δ ≃ 1 or 3/2. These universal behaviors have been modeled via the waiting-time distribution of a task in a queue operating on priority; the waiting time follows a power-law distribution P_w(τ) ∼ τ^(−α) with either α = 1 or 3/2 depending on the details of the queuing dynamics. In these models, the number of incoming tasks per unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, the number of emails delivered to a mailbox per unit time, as we measured, follows a power-law distribution with general exponent γ. For this case, we obtain the exponent α analytically; it is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop a generating-function formalism to obtain the exponent α, which is distinct from the continuous-time approximation used in previous studies.
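The priority-queue dynamics described above can be simulated directly; the sketch below implements a fixed-length variant (execute the highest-priority task with probability p, a random one otherwise) and exhibits the heavy-tailed waiting times the abstract discusses. All parameters are illustrative:

```python
import random

def simulate_priority_queue(n_steps=50000, p=0.999, queue_len=2, seed=1):
    """Barabasi-style priority queue with fixed length: at each step execute
    the highest-priority task with probability p, a random task otherwise,
    then refill the freed slot with a new task of random priority."""
    random.seed(seed)
    queue = [[random.random(), 0] for _ in range(queue_len)]  # [priority, age]
    waits = []
    for _ in range(n_steps):
        if random.random() < p:
            i = max(range(queue_len), key=lambda j: queue[j][0])
        else:
            i = random.randrange(queue_len)
        waits.append(queue[i][1] + 1)      # waiting time of the executed task
        for task in queue:
            task[1] += 1                   # remaining tasks keep waiting
        queue[i] = [random.random(), 0]    # a fresh task arrives
    return waits

waits = simulate_priority_queue()
# Most tasks are served at once, but a few low-priority tasks wait very long.
print(sorted(waits)[len(waits) // 2], max(waits))
```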
Revisiting Street Intersections Using Slot-Based Systems.
Tachet, Remi; Santi, Paolo; Sobolevsky, Stanislav; Reyes-Castro, Luis Ignacio; Frazzoli, Emilio; Helbing, Dirk; Ratti, Carlo
2016-01-01
Since their appearance at the end of the 19th century, traffic lights have been the primary mode of granting access to road intersections. Today, this centuries-old technology is challenged by advances in intelligent transportation, which are opening the way to new solutions built upon slot-based systems similar to those commonly used in aerial traffic: what we call Slot-based Intersections (SIs). Despite simulation-based evidence of the potential benefits of SIs, a comprehensive, analytical framework to compare their relative performance with traffic lights is still lacking. Here, we develop such a framework. We approach the problem in a novel way, by generalizing classical queuing theory. Having defined safety conditions, we characterize capacity and delay of SIs. In the 2-road crossing configuration, we provide a capacity-optimal SI management system. For arbitrary intersection configurations, near-optimal solutions are developed. Results theoretically show that transitioning from a traffic light system to SI has the potential of doubling capacity and significantly reducing delays. This suggests a reduction of non-linear dynamics induced by intersection bottlenecks, with positive impact on the road network. Such findings can provide transportation engineers and planners with crucial insights as they prepare to manage the transition towards a more intelligent transportation infrastructure in cities.
Mapping edge-based traffic measurements onto the internal links in MPLS network
NASA Astrophysics Data System (ADS)
Zhao, Guofeng; Tang, Hong; Zhang, Yi
2004-09-01
Applying multi-protocol label switching (MPLS) techniques to an IP-based backbone for traffic engineering has proven advantageous. Obtaining the volume of load on each internal link of the network is crucial for applying traffic engineering. Although per-link collection is possible, for example with the traditional SNMP scheme, that approach may impose a heavy processing load and sharply degrade the throughput of the core routers. Monitoring only at the edge of the network and mapping the measurements onto the core therefore provides a good alternative. In this paper, we explore a scheme for traffic mapping with edge-based measurements in an MPLS network. The volume of traffic on each internal link over the domain is mapped from measurements available only at ingress nodes. We apply path-based measurements at ingress nodes without enabling measurements in the core of the network. We propose a method that can infer a path from the ingress to the egress node using the label distribution protocol, without collecting routing data from core routers. Based on flow theory and queuing theory, we prove that our approach is effective and present the algorithm for traffic mapping. We also show performance simulation results that indicate the potential of our approach.
Radio and Optical Telescopes for School Students and Professional Astronomers
NASA Astrophysics Data System (ADS)
Hosmer, Laura; Langston, G.; Heatherly, S.; Towner, A. P.; Ford, J.; Simon, R. S.; White, S.; O'Neil, K. L.; Haipslip, J.; Reichart, D.
2013-01-01
The NRAO 20m telescope is now on-line as a part of UNC's Skynet worldwide telescope network. The NRAO is completing integration of radio astronomy tools with the Skynet web interface. We present the web interface and astronomy projects that allow students and astronomers from all over the country to become radio astronomers. The 20 meter radio telescope at NRAO in Green Bank, WV is dedicated to public education and is also part of an experiment in public funding for astronomy. The telescope has a fantastic new web-based interface with priority queuing, accommodating priority for paying customers and enabling free use of otherwise unused time. The revival included many software and hardware improvements, including automatic calibration and improved time integration for better data processing, and a new ultra-high-resolution spectrometer. This new spectrometer is optimized for very narrow spectral lines, which will allow astronomers to study complex molecules and very cold regions of space in remarkable detail. In keeping with its focus on broader impacts, many public outreach and high school education activities have been completed, with many more confirmed for the future. The 20 meter is now a fully automated, powerful tool capable of professional-grade results, available to anyone in the world. Drop by our poster and try out real-time telescope control!
NASA Astrophysics Data System (ADS)
Ahmad, Afandi; Roslan, Muhammad Faris; Amira, Abbes
2017-09-01
In high jump sports, approach take-off speed and force during take-off are the two main parameters for achieving a maximum jump. To measure both parameters, a wireless sensor network (WSN) containing a microcontroller and sensors is needed to report speed and force for jumpers. Most wireless microcontrollers exhibit transmission issues in terms of throughput, latency, and cost. Thus, this study compares wireless microcontrollers on throughput, latency, and cost; the microcontroller with the best performance and price is then implemented in a high-jump wearable device. The experiments integrate three parts: input, processing, and output. A force sensor (at the ankle) and a global positioning system (GPS) sensor (at the body waist) act as inputs for data transmission. These data were then processed by the two microcontrollers, ESP8266 and Arduino Yun Mini, which transmit the data from the sensors to the server (host PC) via the message queuing telemetry transport (MQTT) protocol. The server acts as the receiver, and the results were calculated from the MQTT log files. In the end, the ESP8266 microcontroller was chosen, since it achieved higher throughput, lower latency, and an 11-times lower price than the Arduino Yun Mini microcontroller.
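Throughput and latency of the kind measured here can be recovered from timestamped transfer records; the sketch below assumes a hypothetical (send time, receive time, byte count) record layout for illustration, not the study's actual MQTT log format:

```python
def link_stats(records):
    """Mean latency and throughput from timestamped transfer records.

    Each record is (send_ts, recv_ts, n_bytes) with timestamps in seconds;
    this is a hypothetical log layout for illustration, not the study's
    actual MQTT log format. Returns (mean latency s, throughput bytes/s)."""
    latencies = [recv - send for send, recv, _ in records]
    duration = max(recv for _, recv, _ in records) - min(send for send, _, _ in records)
    total_bytes = sum(n for _, _, n in records)
    return sum(latencies) / len(latencies), total_bytes / duration

# Three hypothetical 64-byte sensor packets.
records = [(0.00, 0.02, 64), (0.10, 0.13, 64), (0.20, 0.22, 64)]
latency, throughput = link_stats(records)
print(round(latency, 4), round(throughput, 1))
```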
A stochastic multi-scale method for turbulent premixed combustion
NASA Astrophysics Data System (ADS)
Cha, Chong M.
2002-11-01
The stochastic chemistry algorithm of Bunker et al. and Gillespie is used to perform the chemical reactions in a transported probability density function (PDF) modeling approach of turbulent combustion. Recently, Kraft & Wagner have demonstrated a 100-fold gain in computational speed (for a 100 species mechanism) using the stochastic approach over the conventional, direct integration method of solving for the chemistry. Here, the stochastic chemistry algorithm is applied to develop a new transported PDF model of turbulent premixed combustion. The methodology relies on representing the relevant spatially dependent physical processes as queuing events. The canonical problem of a one-dimensional premixed flame is used for validation. For the laminar case, molecular diffusion is described by a random walk. For the turbulent case, one of two different material transport submodels can provide the necessary closure: Taylor dispersion or Kerstein's one-dimensional turbulence approach. The former exploits ``eddy diffusivity'' and hence would be much more computationally tractable for practical applications. Various validation studies are performed. Results from the Monte Carlo simulations compare well to asymptotic solutions of laminar premixed flames, both with and without high activation temperatures. The correct scaling of the turbulent burning velocity is predicted in both Damköhler's small- and large-scale turbulence limits. The effect of applying the eddy diffusivity concept in the various regimes is discussed.
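The stochastic chemistry algorithm of Gillespie referenced above can be sketched for the simplest case, a single first-order decay reaction; the rate constant and initial population are illustrative:

```python
import random

def gillespie_decay(n0=100, k=0.1, seed=2):
    """Gillespie's stochastic simulation algorithm for first-order decay.

    One reaction channel (A -> nothing) keeps the sketch minimal: draw the
    time to the next event from an exponential with the current total
    propensity k*n, fire the reaction, repeat until no molecules remain."""
    random.seed(seed)
    t, n = 0.0, n0
    history = [(0.0, n0)]
    while n > 0:
        propensity = k * n                   # total reaction propensity
        t += random.expovariate(propensity)  # exponential waiting time
        n -= 1                               # the decay reaction fires
        history.append((t, n))
    return history

hist = gillespie_decay()
print(len(hist))  # 101 entries: the initial state plus one per molecule
```

With several reaction channels, the same loop would draw which channel fires in proportion to its propensity, which is where the multi-species speedups cited in the abstract come from.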
Scheduling and control strategies for the departure problem in air traffic control
NASA Astrophysics Data System (ADS)
Bolender, Michael Alan
Two problems relating to the departure problem in air traffic control automation are examined. The first problem that is addressed is the scheduling of aircraft for departure. The departure operations at a major US hub airport are analyzed, and a discrete event simulation of the departure operations is constructed. Specifically, the case where there is a single departure runway is considered. The runway is fed by two queues of aircraft. Each queue, in turn, is fed by a single taxiway. Two salient areas regarding scheduling are addressed. The first is the construction of optimal departure sequences for the aircraft that are queued. Several greedy search algorithms are designed to minimize the total time to depart a set of queued aircraft. Each algorithm has a different set of heuristic rules to resolve situations within the search space whenever two branches of the search tree with equal edge costs are encountered. These algorithms are then compared and contrasted with a genetic search algorithm in order to assess the performance of the heuristics. This is done in the context of a static departure problem where the length of the departure queue is fixed. A greedy algorithm which deepens the search whenever two branches of the search tree with non-unique costs are encountered is shown to outperform the other heuristic algorithms. This search strategy is then implemented in the discrete event simulation. A baseline performance level is established, and a sensitivity analysis is performed by implementing changes in traffic mix, routing, and miles-in-trail restrictions for comparison. It is concluded that to minimize the average time spent in the queue for different traffic conditions, a queue assignment algorithm is needed to maintain an even balance of aircraft in the queues. A necessary consideration is to base queue assignment upon traffic management restrictions such as miles-in-trail constraints. 
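Greedy departure sequencing of the kind analyzed above can be sketched with a pairwise separation matrix; the weight classes and separation times below are hypothetical, not the dissertation's data:

```python
# Hypothetical wake-separation requirements in seconds between weight
# classes (H = heavy, S = small); illustrative values only.
SEP = {("H", "H"): 90, ("H", "S"): 120, ("S", "H"): 60, ("S", "S"): 60}

def greedy_departure_order(queue):
    """Greedy sequencing sketch: repeatedly release the queued aircraft
    that can depart soonest behind the previous departure (ties go to the
    earliest-listed aircraft). Returns the sequence with departure times
    and the total time to clear the queue."""
    remaining = list(queue)
    order, t, prev_cls = [], 0, None
    while remaining:
        best = min(remaining,
                   key=lambda ac: 0 if prev_cls is None else SEP[(prev_cls, ac[1])])
        t += 0 if prev_cls is None else SEP[(prev_cls, best[1])]
        order.append((best[0], t))
        prev_cls = best[1]
        remaining.remove(best)
    return order, t

order, makespan = greedy_departure_order([("AA1", "H"), ("AA2", "S"), ("AA3", "H")])
print(order, makespan)  # pairing the two heavies first beats heavy-then-small
```

A full search, as in the dissertation's algorithms, would additionally break cost ties by deepening the search rather than by queue position.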
The second problem addresses the technical challenges associated with merging departing aircraft onto their filed routes in a congested airspace environment. Conflicts between departures and en route aircraft within the Center airspace are analyzed. Speed control, holding the aircraft at an intermediate altitude, re-routing, and vectoring are posed as possible deconfliction maneuvers. A cost assessment of these merge strategies, which are based upon 4D flight management and conflict detection and resolution principles, is given. Several merge conflicts are studied and a cost for each resolution is computed. It is shown that vectoring tends to be the most expensive resolution technique. Altitude hold is simple and costs less than vectoring, but may require a long time for the aircraft to achieve separation. Re-routing is the simplest, and provides the most cost benefit, since the aircraft flies a shorter distance than if it had followed its filed route. Speed control is shown to be ineffective as a means of increasing separation, but is effective for maintaining separation between aircraft. In addition, the effects of uncertainties on the cost are assessed. The analysis shows that cost is invariant with the decision time.
A Cellular Neural Networks Based DiffServ Switch for Satellite Communication Systems
NASA Astrophysics Data System (ADS)
Tarchi, Daniele; Fantacci, Romano; Gubellini, Roberto; Pecorella, Tommaso
2003-07-01
Recent developments in Internet services and advanced compression methods have revived interest in IP-based multimedia satellite communication systems. A main problem arising here is to guarantee specific Quality of Service (QoS) constraints in order to achieve good performance for each traffic class. Among the various QoS approaches used in the Internet, the DiffServ technique has recently become the most promising solution, mainly for its simplicity with respect to the alternatives. Moreover, in satellite communication systems, DiffServ policy computational capabilities are placed at the edge points (end-to-end philosophy); this is very important for a network built on one satellite link, because it reduces the implementation complexity of the satellite's on-board equipment. The satellite switch under consideration uses the Multiple Input Queuing approach. Packets arriving at a switch input are stored in a shared buffer but are logically ordered in individual queues, one for each possible output link. According to the DiffServ policy, within a given logical queue, packets are reordered into individual sub-queues according to priority. A suitable implementation of the DiffServ policy based on a Cellular Neural Network (CNN) is proposed in the paper in order to meet the QoS requirements. CNNs are a set of linear and nonlinear circuits, connected among themselves, that allow parallel and asynchronous computation. CNNs are a class of neural networks similar to Hopfield Neural Networks (HNNs), but more flexible, better suited to VLSI implementation, and well matched to the output contention problem inherent in switching systems. In this paper, a CNN has been designed to maximize a cost functional related to the on-board switch throughput and the QoS constraints. The initial state of each neural cell reflects the presence of at least one packet from a given input logical queue destined for a specific output line.
The input value for each neural cell is a function of the priority and length of each input logical queue. The versatility of the neural network makes it feasible to take the best decision on which packet to deliver to each output satellite beam, in order to meet specific QoS constraints. Numerical results for the CNN approach show that the network converges within a time slot and that an optimal, or at least near-optimal, solution in terms of the cost function is achieved. The proposed system is based on IETF (Internet Engineering Task Force) recommendations; traffic entering the switching fabric can be marked as Expedited Forwarding (EF) or Assured Forwarding (AF), or otherwise handled as Best Effort (BE). Two Assured Forwarding classes with different emission priorities have been implemented, taking into account the time spent inside the logical queue and its length. Expedited Forwarding traffic is typical of services to be delivered with maximum priority, such as streaming or interactive services. Packets belonging to services that need a certain level of priority with low packet loss are marked as Assured Forwarding. Best Effort traffic corresponds to e-mail, file transfer, and other services without particular QoS requirements. The CNN used to resolve conflict situations acts as an arbiter for all the output links. Unlike other Multiple Input Queuing approaches, where there is one arbiter per output line, the proposed approach has a single arbiter that makes the best decision. The selection rule has been defined to give priority to packets according to suitably defined functionals characteristic of each traffic class, under the constraint that no more than one packet can be delivered to the same output line.
The functionals depend on queue length and on the time spent inside the queue by the front packet. The performance of the proposed DiffServ switch has been derived in terms of delay and jitter; buffer occupancy has been analyzed for different configurations, such as a single common buffer, one buffer per input line, and one buffer per input line and priority class. The results highlight the high flexibility of the satellite switch with a CNN, given that the functional used to calculate the priority of each queue can easily be changed, without any increase in complexity or change in the CNN structure, to accommodate different traffic characteristics. Numerical results show that the proposed algorithm outperforms switches based on Multiple Input Queuing with strict priority methods in terms of delay and jitter. Different buffer sizes have also been considered in order to analyze packet loss for the CNN switch algorithm, comparing the different configurations described above. The good behavior of the proposed DiffServ switch has been verified for traffic with a Pareto distribution of packet length and a geometric distribution of packet interarrival times, highlighting good performance in terms of delay and jitter. Numerical results also demonstrate the stability of the method under heavy traffic load; in particular, the maximum permitted load is higher for higher-priority classes.
A decentralized software bus based on IP multicasting
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd
1995-01-01
We describe a decentralized, reconfigurable implementation of a conference management system based on the low-level Internet Protocol (IP) multicasting protocol. IP multicasting allows low-cost, world-wide, two-way transmission of data between large numbers of conferencing participants through the Multicasting Backbone (MBone). Each conference is structured as a software bus: a messaging system that provides a run-time interconnection model in which a separate agent (i.e., the bus) routes, queues, and delivers messages between distributed programs. Unlike the client-server interconnection model, the software bus model provides a level of indirection that enhances the flexibility and reconfigurability of a distributed system. Current software bus implementations like POLYLITH, however, rely on a centralized bus process and point-to-point protocols (i.e., TCP/IP) to route, queue, and deliver messages. We implement a software bus called the MULTIBUS that relies on a separate process only for routing and uses a reliable IP multicasting protocol for delivery of messages. The use of multicasting means that interconnections are independent of IP machine addresses. This approach allows reconfiguration of bus participants during system execution without notifying other participants of new IP addresses. The use of IP multicasting also permits an economy of scale in the number of participants. We describe the MULTIBUS protocol elements and show how our implementation performs better than centralized bus implementations.
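The bus indirection that distinguishes this model from client-server can be sketched in-process. This is not the MULTIBUS protocol itself, only the routing/queuing/delivery indirection idea, with names invented for the example:

```python
from collections import defaultdict, deque

class SoftwareBus:
    """Minimal in-process sketch of the bus indirection idea: senders address
    messages to the bus, never to peer processes, so participants can be
    replaced at run time without other participants learning new addresses."""
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> handler list
        self._queue = deque()                   # pending (topic, message)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        self._queue.append((topic, message))    # queue now, deliver later

    def deliver(self):
        """Route and deliver all queued messages, multicast-style: every
        handler subscribed to the topic receives a copy."""
        while self._queue:
            topic, message = self._queue.popleft()
            for handler in self._subscribers[topic]:
                handler(message)
```

In MULTIBUS the delivery step is performed by reliable IP multicast rather than in-process calls, which is what removes the dependence on peer IP addresses.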
Homology Model of the GABAA Receptor Examined Using Brownian Dynamics
O'Mara, Megan; Cromer, Brett; Parker, Michael; Chung, Shin-Ho
2005-01-01
We have developed a homology model of the GABAA receptor, using the subunit combination of α1β2γ2, the most prevalent type in the mammalian brain. The model is produced in two parts: the membrane-embedded channel domain and the extracellular N-terminal domain. The pentameric transmembrane domain model is built by modeling each subunit by homology with the equivalent subunit of the heteropentameric acetylcholine receptor transmembrane domain. This segment is then joined with the extracellular domain built by homology with the acetylcholine binding protein. The all-atom model forms a wide extracellular vestibule that is connected to an oval chamber near the external surface of the membrane. A narrow, cylindrical transmembrane channel links the outer segment of the pore to a shallow intracellular vestibule. The physiological properties of the model so constructed are examined using electrostatic calculations and Brownian dynamics simulations. A deep energy well of ∼80 kT accommodates three Cl− ions in the narrow transmembrane channel and seven Cl− ions in the external vestibule. Inward permeation takes place when one of the ions queued in the external vestibule enters the narrow segment and ejects the innermost ion. The model, when incorporated into Brownian dynamics, reproduces key experimental features, such as the single-channel current-voltage-concentration profiles. Finally, we simulate the γ2 K289M epilepsy inducing mutation and examine Cl− ion permeation through the mutant receptor. PMID:15749776
Applying Bayesian hierarchical models to examine motorcycle crashes at signalized intersections.
Haque, Md Mazharul; Chin, Hoong Chor; Huang, Helai
2010-01-01
Motorcycles are overrepresented in road traffic crashes and particularly vulnerable at signalized intersections. The objective of this study is to identify causal factors affecting motorcycle crashes at both four-legged and T signalized intersections. Treating the data in time-series cross-section panels, this study explored different hierarchical Poisson models and found that the model allowing an autoregressive lag-1 dependence specification in the error term is the most suitable. Results show that the number of lanes at four-legged signalized intersections significantly increases motorcycle crashes, largely because of the higher exposure resulting from higher motorcycle accumulation at the stop line. Furthermore, the presence of a wide median and an uncontrolled left-turn lane at the major roadways of four-legged intersections exacerbates this potential hazard. For T signalized intersections, the presence of an exclusive right-turn lane at both major and minor roadways and an uncontrolled left-turn lane at major roadways increases motorcycle crashes. Motorcycle crashes increase on high-speed roadways because riders are more vulnerable and less likely to react in time during conflicts. The presence of red light cameras reduces motorcycle crashes significantly at both four-legged and T intersections. With red light cameras, motorcycles are less exposed to conflicts because they are observed to be more disciplined in queuing at the stop line and less likely to make a jump start at the onset of green.
A novel client service quality measuring model and an eHealthcare mitigating approach.
Cheng, L M; Choi, Wai Ping Choi; Wong, Anita Yiu Ming
2016-07-01
Facing population ageing in Hong Kong, demand for long-term elderly health care services is increasing. The challenge is to support a good-quality service under the constraints posed by the recent shortage of nursing and care services professionals, without redesigning the workflow operated in the existing elderly health care industries. The Total QoS measure based on the Finite Capacity Queuing Model is a reliable method and an effective measurement for quality of service. The value is good for measuring the staffing level and offers a measurement for efficiency enhancement when incorporating new technologies like ICT. The implemented system has improved the quality of service by more than 14%, and the released manpower resource will allow clinical care providers to offer further value-added services without actually increasing head count. We have developed a novel quality-of-service measurement for clinical care services based on multiple queues using the finite capacity queue model M/M/c/K/n, and the measurement is useful for estimating the shortage of staff resources in a caring institution. It is essential for future integration with the existing widely used assessment model to develop reliable measuring limits which allow an effective measurement of public funds used in health care industries. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
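The M/M/c/K/n model named here is a birth-death chain whose steady-state distribution can be computed directly. A sketch under the standard Kendall reading (a finite population of n clients, c carers, room for at most K clients in the system) follows; the parameter values in the usage are illustrative, not taken from the paper.

```python
def mmckn_distribution(lam, mu, c, K, n):
    """Steady-state distribution of an M/M/c/K/n queue via its birth-death
    chain: in state k (clients in service or waiting), arrivals occur at
    rate (n - k) * lam while k < K, and service completes at min(k, c) * mu."""
    weights = [1.0]
    for k in range(1, K + 1):
        birth = (n - (k - 1)) * lam
        death = min(k, c) * mu
        weights.append(weights[-1] * birth / death)
    total = sum(weights)
    return [w / total for w in weights]

def mean_in_system(p):
    """Mean number of clients in the system, L = sum k * p_k."""
    return sum(k * pk for k, pk in enumerate(p))
```

From this distribution one can derive staffing measures such as the probability that all c carers are busy (sum of p_k for k >= c), which is the kind of quantity a staffing-level QoS measure would build on.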
Cruz-Piris, Luis; Rivera, Diego; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R
2018-03-20
Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal.
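The topic-as-resource idea can be illustrated with a small sketch. The resource names, scopes, and token table below are invented stand-ins for what a real UMA authorization server would supply; this is not the authors' implementation or a real MQTT/UMA API.

```python
# Illustrative only: MQTT publish/subscribe actions are modelled as scopes
# on topic resources, in the spirit of the paper's proposal.

PERMISSIONS = {
    # token -> {(resource, scope), ...}, a stand-in for UMA grant results
    "sensor-token": {("topic:home/temperature", "publish")},
    "app-token": {("topic:home/temperature", "subscribe")},
}

def resource_for(topic: str) -> str:
    """Model an MQTT topic as an access-controlled resource."""
    return f"topic:{topic}"

def is_authorized(token: str, topic: str, action: str) -> bool:
    """Check whether the token's grants cover this (resource, scope) pair,
    e.g. before honoring an MQTT PUBLISH or SUBSCRIBE packet."""
    return (resource_for(topic), action) in PERMISSIONS.get(token, set())
```

A broker-side enforcement point would perform this check on each PUBLISH/SUBSCRIBE, letting heterogeneous IoT devices and Web services share one access control scheme.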
A Modern Operating System for Near-real-time Environmental Observatories
NASA Astrophysics Data System (ADS)
Orcutt, John; Vernon, Frank
2014-05-01
The NSF Ocean Observatory Initiative (OOI) provided an opportunity for expanding the capabilities for managing open, near-real-time (latencies of seconds) data from ocean observatories. The sensors deployed in this system largely return data from the seafloor over cabled fiber optics as well as via satellite telemetry. Bandwidth demands range from high-definition movies to the transmission of data via Iridium satellite. The extended Internet also provides an opportunity not only to return data, but also to control the sensors and platforms that comprise the observatory. The data themselves are openly available to all users. In order to provide heightened network security and overall reliability, the connections to and from the sensors/platforms are managed without Layer 3 of the Internet, relying instead on message passing using an open protocol, the Advanced Message Queuing Protocol (AMQP). The highest bandwidths in the system are in the Regional Scale Network (RSN) off Oregon and Washington and on the continent, with highly reliable network connections between observatory components at 10 Gbps. The maintenance of metadata and life-cycle histories of sensors and platforms is critical for providing data provenance over the years. The integrated cyberinfrastructure is best thought of as an operating system for the observatory: like the data, the software is open and can be readily applied to new observatories, for example, in the rapidly evolving Arctic.
Earthworks logistics in the high density urban development conditions - case study
NASA Astrophysics Data System (ADS)
Sobotka, A.; Blajer, M.
2017-10-01
Realisation of construction projects in highly urbanised areas carries many difficulties and logistic problems. Earthworks conducted in such conditions are a good example of how important it is to properly plan the works and use the technical means of the logistics infrastructure. The construction processes on the observed construction site, combined with their external logistics service, form a complex system that is difficult to model mathematically and for which appropriate planning data are hard to obtain. The paper describes and analyses earthworks during construction of the Centre of Power Engineering of AGH in Krakow over two stages of a construction project: the planning stage in the preparatory phase (before realization) and the implementation phase of construction works (foundation). In the first case, an example of the use of queuing theory for predicting excavation time under random working conditions of the excavator and the associated trucks is provided. In the second case, there is a change of foundation works technology resulting from changes in earthworks logistics. Observation of the construction confirmed that with appropriate construction works management methods, in this case agile management, the time and cost of the project were not exceeded. The success of a project depends on the ability of the contractor to react quickly when changes occur in the design, technology, environment, etc.
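The excavator-and-trucks setting of the first case is a classic finite-source queuing problem: the excavator is the server and the trucks cycle between loading and hauling. As an illustration (not the authors' model), a seeded Monte Carlo sketch with assumed exponential times might look like this:

```python
import random

def simulate_excavation(n_trucks, load_mean, haul_mean, n_loads, seed=1):
    """Toy Monte Carlo of one excavator serving a truck fleet: exponential
    loading times (the server) and exponential haul-and-return times (truck
    absence). Returns total time to complete n_loads and the excavator's
    idle fraction. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    clock = 0.0
    idle = 0.0
    returns = [0.0] * n_trucks   # when each truck is next back at the pit
    for _ in range(n_loads):
        t = min(range(n_trucks), key=lambda i: returns[i])
        if returns[t] > clock:               # excavator waits for a truck
            idle += returns[t] - clock
            clock = returns[t]
        clock += rng.expovariate(1.0 / load_mean)          # loading time
        returns[t] = clock + rng.expovariate(1.0 / haul_mean)
    return clock, idle / clock
```

Running such a sketch for several fleet sizes shows the trade-off the planning stage must resolve: too few trucks leave the excavator idle, too many leave trucks queuing at the pit.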
NASA Astrophysics Data System (ADS)
Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto
2017-10-01
Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or will simply be queued in entry-time order (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. The INDIGO-DataCloud project has identified these strategies as too simplistic for accommodating scientific workloads efficiently, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.
NASA Astrophysics Data System (ADS)
Seyedhosseini, Seyed Mohammad; Makui, Ahmad; Shahanaghi, Kamran; Torkestani, Sara Sadat
2016-09-01
Determining the best location for a facility to remain profitable over its lifetime is an important decision for public and private firms, which is why dynamic location problems (DLPs) are of critical significance. This paper presents a comprehensive review of published research on DLPs from 1968 to the present, classified into two parts. First, mathematical models are reviewed according to their characteristics: type of parameters (deterministic, probabilistic or stochastic), number and type of objective functions, numbers of commodities and modes, relocation time, number of relocations and relocating facilities, time horizon, budget and capacity constraints, and applicability. The second part presents solution algorithms, main specifications, applications and some real-world case studies of DLPs. We conclude that in the current DLP literature, distribution systems and production-distribution systems with simplifying assumptions, adopted to tackle the complexity of these models, have been studied more than any other field, while concepts such as variety of services (hierarchical networks), reliability, sustainability, relief management, waiting time for services (queuing theory) and risk of facility disruption need further investigation. The available categories based on different criteria, the solution methods and their applicability, and the gaps and analysis presented in this paper suggest directions for future research.
Vanquelef, Enguerran; Simon, Sabrina; Marquant, Gaelle; Garcia, Elodie; Klimerak, Geoffroy; Delepine, Jean Charles; Cieplak, Piotr; Dupradeau, François-Yves
2011-07-01
R.E.D. Server is a unique, open web service, designed to derive non-polarizable RESP and ESP charges and to build force field libraries for new molecules/molecular fragments. It provides computational biologists with the means to derive rigorously molecular electrostatic potential-based charges embedded in force field libraries that are ready to be used in force field development, charge validation and molecular dynamics simulations. R.E.D. Server interfaces quantum mechanics programs, the RESP program and the latest version of the R.E.D. tools. A two-step approach has been developed. The first step consists of preparing P2N file(s) to rigorously define key elements such as atom names, topology and chemical equivalencing needed when building a force field library. Then, the P2N files are used to derive RESP or ESP charges embedded in force field libraries in the Tripos mol2 format. In complex cases, an entire set of force field libraries or a force field topology database is generated. Other features developed in R.E.D. Server include help services, a demonstration, tutorials, frequently asked questions, Jmol-based tools useful to construct PDB input files and parse R.E.D. Server outputs, as well as a graphical queuing system allowing any user to check the status of R.E.D. Server jobs.
NASA Astrophysics Data System (ADS)
Impemba, Ernesto; Inzerilli, Tiziano
2003-07-01
Integration of satellite access networks with the Internet is seen as a strategic goal to achieve in order to provide ubiquitous broadband access to Internet services in Next Generation Networks (NGNs). One of the most studied interworking aspects is the efficient management of satellite resources, i.e. bandwidth and buffer space, in order to satisfy the most demanding application requirements for delay control and bandwidth assurance. In this context, resource management in DVB-S/DVB-RCS satellite technologies, emerging technologies for broadband satellite access and transport of IP applications, is a research issue largely investigated as a means to provide efficient bi-directional communications across satellites. This is in particular one of the principal goals of the SATIP6 project, sponsored within the 5th EU Research Programme Framework, i.e. IST. In this paper we present a possible approach to efficiently exploit bandwidth, the most critical resource in a broadband satellite access network, while pursuing satisfaction of delay and bandwidth requirements for applications with guaranteed QoS, through a traffic control architecture to be implemented in ground terminals. Performance of this approach is assessed in terms of efficient exploitation of the uplink bandwidth and of differentiation and minimization of queuing delays for the most demanding applications over a time-varying capacity. OPNET simulations are used as the analysis tool.
Redesigning emergency department patient flows: application of Lean Thinking to health care.
King, Diane L; Ben-Tovim, David I; Bassham, Jane
2006-08-01
To describe in some detail the methods used and the outcome of an application of concepts from Lean Thinking in establishing streams for patient flows in a teaching general hospital ED. Detailed understanding was gained through process mapping with staff, followed by the identification of value streams (those patients likely to be discharged from the ED, and those likely to be admitted) and the implementation of a process of seeing those patients that minimized complex queuing in the ED. Streaming had a significant impact on waiting times and total durations of stay in the ED. There was a general flattening of the waiting time across all groups. A slight increase in wait for Triage categories 2 and 3 patients was offset by reductions in wait for Triage category 4 patients. All groups of patients spent significantly less overall time in the department, and the average number of patients in the ED at any time decreased. There was a significant reduction in the number of patients who did not wait and a slight decrease in access block. The streaming of patients into groups cared for by a specific team of doctors and nurses, and the minimizing of complex queues in this ED by altering practices in relation to the function of the Australasian Triage Scale, improved patient flow, thereby decreasing the potential for overcrowding.
Automated Traffic Management System and Method
NASA Technical Reports Server (NTRS)
Glass, Brian J. (Inventor); Spirkovska, Liljana (Inventor); McDermott, William J. (Inventor); Reisman, Ronald J. (Inventor); Gibson, James (Inventor); Iverson, David L. (Inventor)
2000-01-01
A data management system and method that enables acquisition, integration, and management of real-time data generated at different rates, by multiple heterogeneous incompatible data sources. The system achieves this functionality by using an expert system to fuse data from a variety of airline, airport operations, ramp control, and air traffic control tower sources, to establish and update reference data values for every aircraft surface operation. The system may be configured as a real-time airport surface traffic management system (TMS) that electronically interconnects air traffic control, airline data, and airport operations data to facilitate information sharing and improve taxi queuing. In the TMS operational mode, empirical data shows substantial benefits in ramp operations for airlines, reducing departure taxi times by about one minute per aircraft in operational use, translating as $12 to $15 million per year savings to airlines at the Atlanta, Georgia airport. The data management system and method may also be used for scheduling the movement of multiple vehicles in other applications, such as marine vessels in harbors and ports, trucks or railroad cars in ports or shipping yards, and railroad cars in switching yards. Finally, the data management system and method may be used for managing containers at a shipping dock, stock on a factory floor or in a warehouse, or as a training tool for improving situational awareness of FAA tower controllers, ramp and airport operators, or commercial airline personnel in airfield surface operations.
Wills, W; Backett-Milburn, K; Gregory, S; Lawton, J
2005-08-01
In this paper, we explore the secondary school environment as an important context for understanding young teenagers' eating habits and food practices. We draw on data collected during semi-structured interviews with 36 young teenagers (aged 13/14 years) living in disadvantaged circumstances in Scotland. We found that the systems inherent in school had an impact on what, where and when participants ate their lunch. Each school had rules governing use of the school dining hall and participants sometimes chose to leave this environment to buy food outside school premises. Our interviews showed that parents determined how much money young people took to school and, therefore, had some control over their food choices. Participants rarely spoke of giving priority to food and eating during the non-curriculum parts of the school day, preferring to spend time 'hanging out' with friends. Eating with friends was sometimes reported as a cause of anxiety, particularly when participants had concerns about body image, appetite or appearance. We suggest that young teenagers' dislike for queuing for food, their ability to budget for food at school and their desire to maximize time spent with friends influence food choices; therefore, these are issues which have implications for health education and will be of interest to those responsible for school meal provision.
A novel approach to multihazard modeling and simulation.
Smith, Silas W; Portelli, Ian; Narzisi, Giuseppe; Nelson, Lewis S; Menges, Fabian; Rekow, E Dianne; Mincer, Joshua S; Mishra, Bhubaneswar; Goldfrank, Lewis R
2009-06-01
To develop and apply a novel modeling approach to support medical and public health disaster planning and response using a sarin release scenario in a metropolitan environment. An agent-based disaster simulation model was developed incorporating the principles of dose response, surge response, and psychosocial characteristics superimposed on topographically accurate geographic information system architecture. The modeling scenarios involved passive and active releases of sarin in multiple transportation hubs in a metropolitan city. Parameters evaluated included emergency medical services, hospital surge capacity (including implementation of disaster plan), and behavioral and psychosocial characteristics of the victims. In passive sarin release scenarios of 5 to 15 L, mortality increased nonlinearly from 0.13% to 8.69%, reaching 55.4% with active dispersion, reflecting higher initial doses. Cumulative mortality rates from releases in 1 to 3 major transportation hubs similarly increased nonlinearly as a function of dose and systemic stress. The increase in mortality rate was most pronounced in the 80% to 100% emergency department occupancy range, analogous to the previously observed queuing phenomenon. Effective implementation of hospital disaster plans decreased mortality and injury severity. Decreasing ambulance response time and increasing available responding units reduced mortality among potentially salvageable patients. Adverse psychosocial characteristics (excess worry and low compliance) increased demands on health care resources. Transfer to alternative urban sites was possible. An agent-based modeling approach provides a mechanism to assess complex individual and systemwide effects in rare events.
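The queuing phenomenon noted in the 80% to 100% occupancy range is the classic nonlinear growth of multi-server waiting time as utilization approaches 1, which a short Erlang C computation makes concrete (the server counts and rates below are illustrative, not from the study):

```python
import math

def erlang_c(c, a):
    """Erlang C probability that an arrival must wait, for c servers and
    offered load a = lam / mu (requires a < c for stability)."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / math.factorial(c) * c / (c - a)
    return last / (s + last)

def mean_wait(c, lam, mu):
    """Mean M/M/c queueing delay, Wq = C(c, a) / (c*mu - lam); it blows up
    nonlinearly as occupancy rho = lam / (c*mu) approaches 1."""
    a = lam / mu
    return erlang_c(c, a) / (c * mu - lam)
```

For example, with 10 servers and unit service rate, raising occupancy from 50% to 95% multiplies the mean wait by more than two orders of magnitude, mirroring the sharp mortality increase the simulation observed near full emergency department occupancy.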
VizieR Online Data Catalog: UBVR photometry of the T Tauri binary DQ Tau (Tofflemire+, 2017)
NASA Astrophysics Data System (ADS)
Tofflemire, B. M.; Mathieu, R. D.; Ardila, D. R.; Akeson, R. L.; Ciardi, D. R.; Johns-Krull, C.; Herczeg, G. J.; Quijano-Vodniza, A.
2017-08-01
The Las Cumbres Observatories Global Telescope (LCOGT) 1m network consists of nine 1m telescopes spread across four international sites: McDonald Observatory (USA), CTIO (Chile), SAAO (South Africa), and Siding Springs Observatory (Australia). Over the 2014-2015 winter observing season, our program requested queued "visits" of DQ Tau 20 times per orbital cycle for 10 continuous orbital periods. Given the orbital period of DQ Tau, the visit cadence corresponded to ~20hr. Each visit consisted of three observations in each of the broadband UBVRIY and narrowband Hα and Hβ filters, requiring ~20 minutes. In this work we present only the UBVR observations, which overlap with our high-cadence observations. Indeed, two eight-night observing runs centered on separate periastron passages of DQ Tau (orbital cycles 3 and 5 in Figure 1) were obtained from the WIYN 0.9m telescope located at the Kitt Peak National Observatory. In addition to our two eight-night observing runs, a synoptic observation program was also in place at the WIYN 0.9m that provided approximately weekly observations of DQ Tau in UBVR during the 2014-B semester. Also, using Apache Point Observatory's ARCSAT 0.5m telescope, we performed observing runs of seven and ten nights centered on two separate periastron passages of DQ Tau (orbital cycles 2 and 7 in Figure 1). (1 data file).
NASA Astrophysics Data System (ADS)
Aulenbach, S. M.; Berukoff, S. J.
2010-12-01
The National Ecological Observatory Network (NEON) will collect data across the United States on the impacts of climate change, land use change and invasive species on ecosystem functions and biodiversity. In-situ sampling and distributed sensor networks, linked by an advanced cyberinfrastructure, will collect site-based data on a variety of organisms, soils, aquatic systems, atmosphere and climate. Targeted airborne remote sensing observations made by NEON as well as geographical data sets and satellite resources produced by Federal agencies will provide data at regional and national scales. The resulting data streams, collected over a 30-year period, will be synthesized into fully traceable information products that are freely and openly accessible to all users. We provide an overview of several collection, access and presentation technologies evaluated for use by observatory systems throughout the data product life cycle. Specifically, we discuss smart phone applications for citizen scientists as well as the use of handheld devices for sample collection and reporting from the field. Protocols for storing, queuing, and retrieving data from observatory sites located throughout the nation are highlighted as are the application of standards throughout the pipelined production of data products. We discuss the automated incorporation of provenance information and digital object identifiers for published data products. The use of widgets and personalized user portals for the discovery and dissemination of NEON data products are also presented.
Real-Time Optical Surveillance of LEO/MEO with Small Telescopes
NASA Astrophysics Data System (ADS)
Zimmer, P.; McGraw, J.; Ackermann, M.
J.T. McGraw and Associates, LLC operates two proof-of-concept wide-field imaging systems to test novel techniques for uncued surveillance of LEO/MEO/GEO and, in collaboration with the University of New Mexico (UNM), uses a third small telescope for rapidly queued same-orbit follow-up observations. Using our GPU-accelerated detection scheme, the proof-of-concept systems operating at sites near and within Albuquerque, NM, have detected objects fainter than V=13 at greater than 6 sigma significance. This detection approximately corresponds to a 16 cm object with albedo of 0.12 at 1000 km altitude. Dozens of objects are measured during each operational twilight period, many of which have no corresponding catalog object. The two proof-of-concept systems, separated by ~30km, work together by taking simultaneous images of the same orbital volume to constrain the orbits of detected objects using parallax measurements. These detections are followed-up by imaging photometric observations taken at UNM to confirm and further constrain the initial orbit determination and independently assess the objects and verify the quality of the derived orbits. This work continues to demonstrate that scalable optical systems designed for real-time detection of fast moving objects, which can be then handed off to other instruments capable of tracking and characterizing them, can provide valuable real-time surveillance data at LEO and beyond, which substantively informs the SSA process.
Patch dynamics of a foraging assemblage of bees.
Wright, David Hamilton
1985-03-01
The composition and dynamics of foraging assemblages of bees were examined from the standpoint of species-level arrival and departure processes in patches of flowers. Experiments with bees visiting 4 different species of flowers in subalpine meadows in Colorado gave the following results: 1) In enriched patches the rates of departure of bees were reduced, resulting in increases in both the number of bees per species and the average number of species present. 2) The reduction in bee departure rates from enriched patches was due to mechanical factors (increased flower handling time) and to behavioral factors (an increase in the number of flowers visited per inflorescence and in the number of inflorescences visited per patch). Bees foraging in enriched patches could collect nectar 30-45% faster than those foraging in control patches. 3) The quantitative changes in foraging assemblages due to enrichment, in terms of means and variances of species population sizes, the fraction of time a species was present in a patch, and the mean and variance of the number of species present, were in reasonable agreement with predictions drawn from queuing theory and studies in island biogeography. 4) Experiments performed with 2 species of flowers with different corolla tube lengths demonstrated that manipulation of resources of differing availability had unequal effects on particular subsets of the larger foraging community. The arrival-departure process of bees on flowers and the immigration-extinction process of species on islands are contrasted, and the value of the stochastic, species-level approach to community composition is briefly discussed.
Models of emergency departments for reducing patient waiting times.
Laskowski, Marek; McLeod, Robert D; Friesen, Marcia R; Podaima, Blake W; Alfa, Attahiru S
2009-07-02
In this paper, we apply both agent-based models and queuing models to investigate patient access and patient flow through emergency departments. The objective of this work is to gain insights into the comparative contributions and limitations of these complementary techniques, in their ability to contribute empirical input into healthcare policy and practice guidelines. The models were developed independently, with a view to compare their suitability to emergency department simulation. The current models implement relatively simple general scenarios, and rely on a combination of simulated and real data to simulate patient flow in a single emergency department or in multiple interacting emergency departments. In addition, several concepts from telecommunications engineering are translated into this modeling context. The framework of multiple-priority queue systems and the genetic programming paradigm of evolutionary machine learning are applied as a means of forecasting patient wait times and as a means of evolving healthcare policy, respectively. The models' utility lies in their ability to provide qualitative insights into the relative sensitivities and impacts of model input parameters, to illuminate scenarios worthy of more complex investigation, and to iteratively validate the models as they continue to be refined and extended. The paper discusses future efforts to refine, extend, and validate the models with more data and real data relative to physical (spatial-topographical) and social inputs (staffing, patient care models, etc.). Real data obtained through proximity location and tracking system technologies is one example discussed.
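The multiple-priority queue framework mentioned above can be illustrated with a minimal discrete-event sketch: one server (clinician), two patient classes, and non-preemptive priority service. The arrival rate, service rate, and 50/50 class mix are assumptions for illustration, not values from the paper:

```python
import heapq
import random

def simulate(n=2000, lam=0.9, mu=1.0, seed=1):
    """Single-server, two-class, non-preemptive priority queue.

    Class 0 is urgent, class 1 is routine; returns the mean waiting time
    per class. Rates are illustrative, not calibrated to any real ED.
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)                 # Poisson arrival process
        arrivals.append((t, rng.randint(0, 1)))   # random priority class
    waiting, free_at, i = [], 0.0, 0              # heap of (priority, arrival time)
    waits = {0: [], 1: []}
    while i < n or waiting:
        if not waiting:                           # server idle: take next arrival
            arr_t, prio = arrivals[i]; i += 1
            heapq.heappush(waiting, (prio, arr_t))
            free_at = max(free_at, arr_t)
        while i < n and arrivals[i][0] <= free_at:   # admit everyone already here
            heapq.heappush(waiting, (arrivals[i][1], arrivals[i][0])); i += 1
        prio, arr_t = heapq.heappop(waiting)      # most urgent class served first
        waits[prio].append(free_at - arr_t)
        free_at += rng.expovariate(mu)            # exponential service time
    return {c: sum(w) / len(w) for c, w in waits.items()}

means = simulate()
print(means)  # urgent patients (class 0) wait far less on average
```

Even this toy version reproduces the qualitative behavior such models are used for: under heavy load, the wait for the low-priority class grows much faster than for the high-priority class.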
Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun
2008-05-28
Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4-15.9 times faster, while Unphased jobs ran 1.1-18.6 times faster, compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
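The job-generation step can be sketched as follows, under our assumption (not stated in the abstract) that "consecutive" windows are sliding windows of adjacent loci and "combinational" windows are all unordered subsets of a given size; each window then becomes one independent FBAT/Unphased job suitable for a batch queue:

```python
from itertools import combinations

def consecutive_windows(loci, width):
    """Sliding windows of adjacent loci (e.g. 26 loci, width 3 -> 24 windows)."""
    return [tuple(loci[i:i + width]) for i in range(len(loci) - width + 1)]

def combinational_windows(loci, size):
    """All unordered combinations of loci, regardless of adjacency."""
    return list(combinations(loci, size))

loci = list(range(1, 27))                    # 26 loci, as in the study
print(len(consecutive_windows(loci, 3)))     # 24 sliding windows
print(len(combinational_windows(loci, 3)))   # C(26, 3) = 2600 combinations
```

The combinational count grows combinatorially with window size, which is exactly why the authors needed a cluster scheduler: thousands of small, independent, embarrassingly parallel jobs.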
BrainIACS: a system for web-based medical image processing
NASA Astrophysics Data System (ADS)
Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.
2009-02-01
We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is implemented entirely with freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as addition and deletion of elements, are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and ultimately JavaScript, after compilation of the front-end code), it is platform independent, the only requirements being that a Servlet implementation is available and that the processing algorithms can execute on the server platform.
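The XML-to-widget idea can be sketched with Python's standard `xml.etree.ElementTree`; the element and attribute names below are invented for illustration and are not BrainIACS's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical interface description. Element and attribute names are
# illustrative only, not the real BrainIACS schema.
SPEC = """
<interface algorithm="skull-strip">
  <checkbox id="smooth" label="Apply smoothing" default="true"/>
  <input id="iterations" label="Iterations" type="int" min="1" max="100"/>
  <dropdown id="atlas" label="Atlas">
    <option>adult</option>
    <option>pediatric</option>
  </dropdown>
</interface>
"""

def parse_interface(xml_text):
    """Parse an XML interface description into a list of widget dicts."""
    root = ET.fromstring(xml_text)
    widgets = []
    for elem in root:
        w = {"kind": elem.tag, **elem.attrib}    # tag -> widget type
        if elem.tag == "dropdown":               # nested options become a list
            w["options"] = [o.text for o in elem.findall("option")]
        widgets.append(w)
    return widgets

for w in parse_interface(SPEC):
    print(w["kind"], w.get("id"))
```

A front-end would then map each widget dict to the corresponding UI element, which is why adding a field to an algorithm's interface reduces to editing the XML file.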
Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele
2014-01-01
Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated great potential, particularly for medium-throughput studies in vitro, in both academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to achieve sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: approximately 8 GB per MEA per hour, uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEA users.
Optimizing the night time with dome vents and SNR-QSO at CFHT
NASA Astrophysics Data System (ADS)
Devost, Daniel; Mahoney, Billy; Moutou, Claire; CFHT QSO Team; CFHT Software Group
2017-06-01
Night time is a precious and costly commodity and it is important to get everything we can out of every second of every night of observing. In 2012 the Canada-France-Hawaii Telescope started operating 12 new vent doors installed on the dome over the course of the previous two years. The project was highly successful, and seeing measurements show that venting the dome greatly enhances image quality at the focal plane. In order to capitalize on the gains brought by the new vents, the observatory started exploring a new mode of observation called SNR-QSO. This mode consists of a new implementation inside our Queued Service Observation (QSO) system. Exposure times are adjusted for each frame depending on the weather conditions in order to reach a specific depth: a target Signal-to-Noise Ratio (SNR) at a certain magnitude. The goal of this new mode is to capitalize on the exquisite seeing provided by Maunakea, complemented by the minimized dome turbulence, to use the least amount of time to reach the depth required by the science programs. Specific implementations were successfully tested on two different instruments, our wide-field camera MegaCam and our high-resolution spectrograph ESPaDOnS. I will present the methods used for each instrument to achieve SNR observing and the gains produced by these new observing modes in order to reach the scientific goals of accepted programs in a shorter amount of time.
Optimisation of a honeybee-colony's energetics via social learning based on queuing delays
NASA Astrophysics Data System (ADS)
Thenius, Ronald; Schmickl, Thomas; Crailsheim, Karl
2008-06-01
Natural selection shaped the foraging-related processes of honeybees in such a way that a colony can react optimally to changing environmental conditions. To investigate this complex dynamic social system, we developed a multi-agent model of the nectar flow inside and outside of a honeybee colony. In a honeybee colony, a temporal caste collects nectar in the environment. These foragers bring their harvest into the colony, where they unload their nectar loads to one or more storer bees. Our model predicts that a cohort of foragers, collecting nectar from a single nectar source, is able to detect changes in quality in other food sources they have never visited, via the nectar processing system of the colony. We identified two novel pathways of forager-to-forager communication. Foragers can gain information about changes in the nectar flow in the environment via changes in their mean waiting time for unloadings and the number of experienced multiple unloadings. In this way, two distinct groups of foragers that forage on different nectar sources and that never communicate directly can share information via a third cohort of worker bees. We show that this noisy and loosely knit social network allows a colony to perform collective information processing, so that a single forager has all the information necessary to 'tune' its social behaviour, like dancing or dance-following. In this way, the net nectar gain of the colony is increased.
Beam queuing for aeronautical free space optical networks
NASA Astrophysics Data System (ADS)
Karras, Kimon; Marinos, Dimitris; Kouros, Pavlos
2010-08-01
Free space optical technologies are currently only very marginally used in aviation, particularly for communication purposes. Most applications occur in a military environment, with civilian aviation remaining oblivious to their advantages. One of these is high-bandwidth communication between the various actors in an aeronautical network. Considerable research is underway to resolve a multitude of issues, such as reliable reception and transmission of the optical signal and the construction of high-performance, small and lightweight terminals for the optical transceiver. The slow Pointing, Acquisition and Tracking (PAT) of the latter represents a significant issue, which detracts from their usability in such an environment. Since an aircraft may carry only a limited number of such terminals on board, the delay of a terminal in reacquiring a target (which is in the order of several seconds) constitutes a significant hurdle in achieving satisfactory connectivity. This paper proposes an optimization technique in which packets are reordered dynamically before transmission in the sender node in order to minimize terminal movement and thus avoid the time-consuming PAT process. Several parameters are considered, such as the QoS of the packets, minimization of the number of movements of the terminal, and the distance it must traverse when it reacquires a target. The algorithm was tested by integrating it into a custom-built, discrete event SystemC simulator. The results verify that incorporating it into such a system yields tangible benefits in terms of the practical throughput achieved, through the minimization of idle time spent while the terminal is moving.
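A minimal sketch of the beam-queuing idea (ours, not the paper's actual algorithm): batch packets by target so the terminal slews at most once per target, and visit targets in order of their most urgent queued packet. The `target` and `deadline` fields are illustrative:

```python
from collections import defaultdict

def reorder(packets):
    """Greedy beam-queuing sketch: group packets per target so the terminal
    points once per target, visiting targets in order of the most urgent
    packet queued for each; within a target, send by deadline."""
    by_target = defaultdict(list)
    for p in packets:
        by_target[p["target"]].append(p)
    # Order targets by the tightest deadline among their queued packets.
    order = sorted(by_target, key=lambda t: min(p["deadline"] for p in by_target[t]))
    plan = []
    for t in order:
        plan.extend(sorted(by_target[t], key=lambda p: p["deadline"]))
    return plan

pkts = [
    {"id": 1, "target": "A", "deadline": 5},
    {"id": 2, "target": "B", "deadline": 1},
    {"id": 3, "target": "A", "deadline": 2},
]
plan = reorder(pkts)
print([p["id"] for p in plan])  # [2, 3, 1]: one slew (B -> A) instead of two
```

Sending in arrival order (A, B, A) would require two reacquisitions; the reordered plan needs only one, which is the source of the throughput gain the abstract describes.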
Neural representations and mechanisms for the performance of simple speech sequences
Bohland, Jason W.; Bullock, Daniel; Guenther, Frank H.
2010-01-01
Speakers plan the phonological content of their utterances prior to their release as speech motor acts. Using a finite alphabet of learned phonemes and a relatively small number of syllable structures, speakers are able to rapidly plan and produce arbitrary syllable sequences that fall within the rules of their language. The class of computational models of sequence planning and performance termed competitive queuing (CQ) models have followed Lashley (1951) in assuming that inherently parallel neural representations underlie serial action, and this idea is increasingly supported by experimental evidence. In this paper we develop a neural model that extends the existing DIVA model of speech production in two complementary ways. The new model includes paired structure and content subsystems (cf. MacNeilage, 1998) that provide parallel representations of a forthcoming speech plan, as well as mechanisms for interfacing these phonological planning representations with learned sensorimotor programs to enable stepping through multi-syllabic speech plans. On the basis of previous reports, the model’s components are hypothesized to be localized to specific cortical and subcortical structures, including the left inferior frontal sulcus, the medial premotor cortex, the basal ganglia and thalamus. The new model, called GODIVA (Gradient Order DIVA), thus fills a void in current speech research by providing formal mechanistic hypotheses about both phonological and phonetic processes that are grounded by neuroanatomy and physiology. This framework also generates predictions that can be tested in future neuroimaging and clinical case studies. PMID:19583476
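The competitive-queuing principle is easy to state in code: a parallel activation gradient over planned items is read out serially by repeatedly selecting the most active item and suppressing it after production. This toy sketch (ours, not the GODIVA implementation) shows the mechanism:

```python
# Minimal competitive-queuing (CQ) sketch: the whole plan is represented in
# parallel as an activation gradient; on each step the most active item
# wins, is produced, and is suppressed, so serial order emerges from a
# parallel representation.
def cq_readout(plan):
    activations = dict(plan)              # item -> activation strength
    sequence = []
    while activations:
        winner = max(activations, key=activations.get)
        sequence.append(winner)
        del activations[winner]           # suppress the produced item
    return sequence

print(cq_readout({"go": 0.9, "di": 0.6, "va": 0.3}))  # ['go', 'di', 'va']
```

Because order is carried by relative activation rather than by explicit links, noise on the gradient naturally produces the transposition errors that make CQ models attractive for serial behavior.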
Patient Safety Walkaround: a communication tool for the reallocation of health service resources
Ferorelli, Davide; Zotti, Fiorenza; Tafuri, Silvio; Pezzolla, Angela; Dell’Erba, Alessandro
2016-01-01
Abstract The study aims to evaluate the use of the Patient Safety Walkaround (SWR) model in an Italian hospital, through the adoption of parametric indices, survey tools, and process indicators. In the first meeting, an interview was conducted to verify knowledge of clinical risk management concepts (process indicators). One month later, the questions provided by Frankel (survey tool) were administered. In each subsequent month, an SWR was carried out, assisting the healthcare professionals and collecting suggestions and solutions. Results were classified according to the Vincent model and analyzed to define an action plan. The amount of risk was quantified by the risk priority index (RPI). An organizational deficit concerns the management of the operating theatre. A state of intolerance among patients queuing for outpatient visits was noted. The lack of scheduling of the operating rooms is often the cause of sudden displacements, with conflict between patients and caregivers as a consequence. Other causes of increased waiting times are the presence in the ward of a single trolley for medications and of a single room for admission and preadmission of patients. Patients who suffered allergic reactions attributed them to the presence of other patients during acceptance and collection of medical history. All health professionals reported the problem of a high number of patients' relatives in the wards. Our study indicates the value of the SWR as an instrument to improve the quality of care. PMID:27741109
Action and perception in literacy: A common-code for spelling and reading.
Houghton, George
2018-01-01
There is strong evidence that reading and spelling in alphabetical scripts depend on a shared representation (common-coding). However, computational models usually treat the two skills separately, producing a wide variety of proposals as to how the identity and position of letters is represented. This article treats reading and spelling in terms of the common-coding hypothesis for perception-action coupling. Empirical evidence for common representations in spelling-reading is reviewed. A novel version of the Start-End Competitive Queuing (SE-CQ) spelling model is introduced, and tested against the distribution of positional errors in Letter Position Dysgraphia, data from intralist intrusion errors in spelling to dictation, and dysgraphia because of nonperipheral neglect. It is argued that no other current model is equally capable of explaining this range of data. To pursue the common-coding hypothesis, the representation used in SE-CQ is applied, without modification, to the coding of letter identity and position for reading and lexical access, and a lexical matching rule for the representation is proposed (Start End Position Code model, SE-PC). Simulations show the model's compatibility with benchmark findings from form priming, its ability to account for positional effects in letter identification priming and the positional distribution of perseverative intrusion errors. The model supports the view that spelling and reading use a common orthographic description, providing a well-defined account of the major features of this representation.
Extending the mirror neuron system model, II: what did I just do? A new role for mirror neurons.
Bonaiuto, James; Arbib, Michael A
2010-04-01
A mirror system is active both when an animal executes a class of actions (self-actions) and when it sees another execute an action of that class. Much attention has been given to the possible roles of mirror systems in responding to the actions of others but there has been little attention paid to their role in self-actions. In the companion article (Bonaiuto et al. Biol Cybern 96:9-38, 2007) we presented MNS2, an extension of the Mirror Neuron System model of the monkey mirror system trained to recognize the external appearance of its own actions as a basis for recognizing the actions of other animals when they perform similar actions. Here we further extend the study of the mirror system by introducing the novel hypotheses that a mirror system may additionally help in monitoring the success of a self-action and may also be activated by recognition of one's own apparent actions as well as efference copy from one's intended actions. The framework for this computational demonstration is a model of action sequencing, called augmented competitive queuing, in which action choice is based on the desirability of executable actions. We show how this "what did I just do?" function of mirror neurons can contribute to the learning of both executability and desirability which in certain cases supports rapid reorganization of motor programs in the face of disruptions.
Arbib, Michael A
2010-01-01
We develop the view that the involvement of mirror neurons in embodied experience grounds brain structures that underlie language, but that many other brain regions are involved. We stress the cooperation between the dorsal and ventral streams in praxis and language. Both have perceptual and motor schemas but the perceptual schemas in the dorsal path are affordances linked to specific motor schemas for detailed motor control, whereas the ventral path supports planning and decision making. This frames the hypothesis that the mirror system for words evolved from the mirror system for actions to support words-as-phonological-actions, with semantics provided by the linkage to neural systems supporting perceptual and motor schemas. We stress the importance of computational models which can be linked to the parametric analysis of data and conceptual analysis of these models to support new patterns of understanding of the data. In the domain of praxis, we assess the FARS model of the canonical system for grasping, the MNS models for the mirror system for grasping, and the Augmented Competitive Queuing model that extends the control of action to the opportunistic scheduling of action sequences and also offers a new hypothesis on the role of mirror neurons in self action. Turning to language, we use Construction Grammar as our linguistic framework to get beyond single words to phrases and sentences, and initiate analysis of what brain functions must complement mirror systems to support this functionality.
Practical Considerations of Moisture in Baled Biomass Feedstocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
William A. Smith; Ian J. Bonner; Kevin L. Kenney
2013-01-01
Agricultural residues make up a large portion of the immediately available biomass feedstock for renewable energy markets. Current collection and storage methods rely on existing feed and forage practices designed to preserve nutrients and properties of digestibility. Low-cost collection and storage practices that preserve carbohydrates across a range of inbound moisture contents are needed to assure the economic and technical success of the emerging biomass industry. This study examines the movement of moisture in storage and identifies patterns of migration resulting from several on-farm storage systems and their impacts on moisture measurement and dry matter recovery. Baled corn stover and energy sorghum were stored outdoors in uncovered, tarp-covered, or wrapped stacks and sampled periodically to measure moisture and dry matter losses. Interpolation between discrete sampling locations in the stack improved bulk moisture content estimates and showed clear patterns of accumulation and re-deposition. Atmospheric exposure, orientation, and contact with barriers (i.e., soil, tarp, and wrap surfaces) were found to cause the greatest amount of moisture heterogeneity within stacks. Although the bulk moisture content of many stacks remained in the range suitable for aerobic stability, regions of high moisture were sufficient to support microbial activity, and thus dry matter loss. Stack configuration, orientation, and coverage methods are discussed relative to their impact on moisture management and dry matter preservation. Additionally, sample collection and data analysis are discussed relative to assessment at the biorefinery as it pertains to stability in storage, queuing, and moisture carried into processing.
Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior
NASA Astrophysics Data System (ADS)
Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui
2003-08-01
In previous work we introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by the biologist A. Lindenmayer as a method to model plant growth. L-Systems are string-rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which jointly describes the packet arrival and the packet size processes. The packet arrival process is modeled through an L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore trace, a trace of aggregate WAN traffic, and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System-based traffic model can achieve very good fitting performance in terms of first- and second-order statistics and queuing behavior.
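The string-rewriting mechanics of an L-System can be shown in a few lines. This sketch applies deterministic rules in parallel to every symbol each iteration; the stochastic variant used for traffic modeling would choose among competing productions at random, and the traffic model additionally maps each symbol to a packet arrival rate:

```python
# Minimal deterministic L-system: an alphabet of symbols, an axiom, and
# production rules applied in parallel to every symbol at each iteration.
# Symbols without a rule are copied unchanged.
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's classic algae system: A -> AB, B -> A.
# A -> AB -> ABA -> ABAAB -> ABAABABA  (lengths follow the Fibonacci numbers)
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # ABAABABA
```

In the traffic model each symbol stands for an arrival rate, so iterating the rewriting builds a self-similar rate process, which is what gives the generated traffic its multifractal scaling.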
Adopting a child with cleft lip and palate: a study of parents' experiences.
Hansson, Emma; Ostman, Jenny; Becker, Magnus
2013-02-01
Adoption of Chinese children with cleft lip and palate (CLP) has become increasingly common in Sweden. The aim of this study was to examine parents' experiences when adopting a child with CLP. Since 2008, 34 adopted children with CLP have been treated in our department. A questionnaire was sent to 33 of the families and 30 of them answered (91%). The parents had queued from 1 month to 8 years before they were offered a child. Eighteen families reported that they received information on CLP from the adoption agency, and 87% contacted the department of plastic surgery for additional information. In 15 cases (45%) previously unknown medical conditions or birth defects other than CLP were discovered in Sweden. Most parents (67%) had been informed before the adoption that their child could be a carrier of resistant bacteria, but not all had received enough information to grasp what being a carrier implies. The great majority of the families did not feel that the early hospitalisation for the first operation had a negative impact on the attachment between them and their adopted child. They thought that the aesthetic and functional results of the operations were "better than expected". Seventeen families stated that people react to the cleft, and four of them consider the reactions a problem. Prospective adoptive parents should be informed that the child might have unsuspected medical conditions, might carry resistant bacteria (and what such carriage implies), and that the required treatment and long-term results are not predictable.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.
2015-05-01
The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
Results from an evaluation of tobacco control policies at the 2010 Shanghai World Expo.
Li, Xiang; Zheng, PinPin; Fu, Hua; Berg, Carla; Kegler, Michelle
2013-09-01
Large-scale international events such as World Expos and Olympic Games have the potential to strengthen smoke-free norms globally. The Shanghai 2010 World Expo was one of the first large-scale events to implement and evaluate the adoption of strict tobacco control policies. To evaluate implementation of tobacco control policies at the 2010 World Expo in Shanghai, China. This mixed methods evaluation was conducted from July to October 2010. Observations were conducted in all 155 pavilions and outdoor queuing areas, all 45 souvenir shops, a random sample of restaurants (51 of 119) and selected outdoor non-smoking areas in all sections of the Expo. In addition, intercept surveys were completed with 3022 visitors over a 4-month period. All pavilions and souvenir shops were smoke-free. Restaurants were smoke-free, with only 0.1% of customers observed smoking. Smoking was more common in outdoor non-smoking areas, but still relatively rare overall with only 4.5% of visitors observed smoking. Tobacco products were not sold or marketed in any public settings except for three pavilions that had special exemptions from the policy. Overall, 80.3% of visitors were aware of the smoke-free policy at the World Expo, 92.5% of visitors supported the policy and 97.1% of visitors were satisfied with the smoke-free environment. Tobacco control policies at the World Expo sites were generally well-enforced and accepted although compliance was not 100%, particularly in outdoor non-smoking areas.
Autonomous Satellite Operations Via Secure Virtual Mission Operations Center
NASA Technical Reports Server (NTRS)
Miller, Eric; Paulsen, Phillip E.; Pasciuto, Michael
2011-01-01
The science community is interested in improving their ability to respond to rapidly evolving, transient phenomena via autonomous rapid reconfiguration, which derives from the ability to assemble separate but collaborating sensors and data forecasting systems to meet a broad range of research and application needs. Current satellite systems typically require human intervention to respond to triggers from dissimilar sensor systems. Additionally, satellite ground services often need to be coordinated days or weeks in advance. Finally, the boundaries between the various sensor systems that make up such a Sensor Web are defined by such things as link delay and connectivity, data and error rate asymmetry, data reliability, quality of service provisions, and trust, complicating autonomous operations. Over the past ten years, researchers from the NASA Glenn Research Center (GRC), General Dynamics, Surrey Satellite Technology Limited (SSTL), Cisco, Universal Space Networks (USN), the U.S. Geological Survey (USGS), the Naval Research Laboratory, the DoD Operationally Responsive Space (ORS) Office, and others have worked collaboratively to develop a virtual mission operations capability. Called VMOC (Virtual Mission Operations Center), this new capability allows cross-system queuing of dissimilar mission unique systems through the use of a common security scheme and published application programming interfaces (APIs). Collaborative VMOC demonstrations over the last several years have supported the standardization of spacecraft to ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of new tactics, techniques and procedures that lead to responsive space employment.
Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus
2007-01-01
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
SciDAC-Data, A Project to Enabling Data Driven Modeling of Exascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubarak, M.; Ding, P.; Aliaga, L.
The SciDAC-Data project is a DOE funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab Data Center on the organization, movement, and consumption of High Energy Physics data. The project will analyze the analysis patterns and data organization that have been used by the NOvA, MicroBooNE, MINERvA and other experiments, to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership class exascale computing facilities. We will address the use of the SciDAC-Data distributions acquired from Fermilab Data Center's analysis workflows and corresponding to around 71,000 HEP jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in HPC environments. In particular we describe in detail how the Sequential Access via Metadata (SAM) data handling system in combination with the dCache/Enstore based data archive facilities have been analyzed to develop the radically different models of the analysis of HEP data. We present how the simulation may be used to analyze the impact of design choices in archive facilities.
NASA Astrophysics Data System (ADS)
Cristóbal-Hornillos, D.; Varela, J.; Ederoclite, A.; Vázquez Ramió, H.; López-Sainz, A.; Hernández-Fuertes, J.; Civera, T.; Muniesa, D.; Moles, M.; Cenarro, A. J.; Marín-Franch, A.; Yanes-Díaz, A.
2015-05-01
The Observatorio Astrofísico de Javalambre consists of two main telescopes: JST/T250, a 2.5 m telescope with a FoV of 3 deg, and JAST/T80, an 83 cm telescope with a 2 deg FoV. JST/T250 will be devoted to completing the Javalambre-PAU Astronomical Survey (J-PAS), a photometric survey with a system of 54 narrow-band plus 3 broad-band filters covering an area of 8500 deg². JAST/T80 will perform the J-PLUS survey, covering the same area with a system of 12 filters. This contribution presents the software and hardware architecture designed to store and process the data. The processing pipeline runs daily and is devoted to correcting the instrumental signature on the science images, performing astrometric and photometric calibration, and computing individual image catalogs. In a second stage, the pipeline performs the combination of the tile mosaics and the computation of the final catalogs. The catalogs are ingested into a scientific database to be provided to the community. The processing software is connected to a management database that stores persistent information about the pipeline operations performed on each frame. The processing pipeline is executed on a computing cluster under a batch queuing system. Regarding the storage system, it will combine disk and tape technologies. The disk storage system will have the capacity to store the data that is accessed by the pipeline. The tape library will store and archive the raw data and earlier data releases with lower access frequency.
Wilber 3: A Python-Django Web Application For Acquiring Large-scale Event-oriented Seismic Data
NASA Astrophysics Data System (ADS)
Newman, R. L.; Clark, A.; Trabant, C. M.; Karstens, R.; Hutko, A. R.; Casey, R. E.; Ahern, T. K.
2013-12-01
Since 2001, the IRIS Data Management Center (DMC) WILBER II system has provided a convenient web-based interface for locating seismic data related to a particular event, and requesting a subset of that data for download. Since its launch, both the scale of available data and the technology of web-based applications have developed significantly. Wilber 3 is a ground-up redesign that leverages a number of public and open-source projects to provide an event-oriented data request interface with a high level of interactivity and scalability for multiple data types. Wilber 3 uses the IRIS/Federation of Digital Seismic Networks (FDSN) web services for event data, metadata, and time-series data. Combining a carefully optimized Google Map with the highly scalable SlickGrid data API, the Wilber 3 client-side interface can load tens of thousands of events or networks/stations in a single request, and provide instantly responsive browsing, sorting, and filtering of event data and metadata in the web browser, without further reliance on the data service. The server-side of Wilber 3 is a Python-Django application, one of over a dozen developed in the last year at IRIS, whose common framework, components, and administrative overhead represent a massive savings in developer resources. Requests for assembled datasets, which may include thousands of data channels and gigabytes of data, are queued and executed using the Celery distributed Python task scheduler, giving Wilber 3 the ability to operate in parallel across a large number of nodes.
Synchrotron Imaging Computations on the Grid without the Computing Element
NASA Astrophysics Data System (ADS)
Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.
2011-12-01
Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid computing paradigm, the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces with a Control System that manages a series of computationally demanding scientific imaging tasks in an online manner. Instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occur during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done on a standard gLite Worker Node. Despite the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.
NASA Technical Reports Server (NTRS)
Stehle, Roy H.; Ogier, Richard G.
1993-01-01
Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
LWAs computational platform for e-consultation using mobile devices: cases from developing nations.
Olajubu, Emmanuel Ajayi; Odukoya, Oluwatoyin Helen; Akinboro, Solomon Adegbenro
2014-01-01
Mobile devices have been improving the standard of living in developing nations by providing timely and accurate information anywhere and anytime through wireless media. A shortage of experts in medical fields is evident throughout the whole world but is more pronounced in developing nations. Thus, this study proposes a telemedicine platform for the vulnerable areas of developing nations. The vulnerable areas are the interior regions with little or no medical facilities, whose dwellers are therefore very susceptible to sickness and disease. The framework uses mobile devices that run LightWeight Agents (LWAs) to send consultation requests from the vulnerable interiors to a remote medical expert in an urban city. The feedback is conveyed to the requester through the same medium. The system architecture, which contains AgenRoller, LWAs, the front-end (mobile devices) and the back-end (the medical server), is presented, along with the algorithm for the software component of the architecture (AgenRoller). The system is modeled as an M/M/1/c queuing system and simulated using SimEvents from the MATLAB Simulink environment. The simulation results presented show the average queue length, the number of entities in the queue, and the number of entities departing from the system. Together these indicate the rate of information processing in the system. A full-scale development of this system with proper implementation will help extend the few medical facilities available in the urban cities of developing nations to the interiors, thereby reducing the number of casualties in the vulnerable areas of the developing world, especially in Sub-Saharan Africa.
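The M/M/1/c model used above (Poisson arrivals, one exponential server, and a finite capacity of c entities in the system) can also be sketched directly as a small discrete-event simulation; the rates and capacity below are illustrative placeholders, not parameters from the study:

```python
import random

def simulate_mm1c(lam, mu, capacity, n_arrivals, seed=42):
    """Discrete-event simulation of an M/M/1/c queue.

    lam: arrival rate, mu: service rate, capacity: max entities
    held in the system (queue + server). Returns the fraction of
    arrivals blocked and the time-average number waiting.
    """
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    in_system = 0
    blocked = 0
    area = 0.0            # time-integral of the waiting-line length
    arrivals_seen = 0
    while arrivals_seen < n_arrivals:
        t_next = min(next_arrival, next_departure)
        area += max(in_system - 1, 0) * (t_next - t)
        t = t_next
        if next_arrival <= next_departure:          # arrival event
            arrivals_seen += 1
            if in_system >= capacity:
                blocked += 1                         # entity is lost
            else:
                in_system += 1
                if in_system == 1:                   # server was idle
                    next_departure = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:                                        # departure event
            in_system -= 1
            next_departure = (t + rng.expovariate(mu)
                              if in_system > 0 else float("inf"))
    return blocked / arrivals_seen, area / t

block_frac, mean_queue = simulate_mm1c(lam=0.8, mu=1.0, capacity=5,
                                       n_arrivals=50_000)
print(f"blocking fraction: {block_frac:.3f}, mean queue length: {mean_queue:.3f}")
```

For these parameters the simulated blocking fraction should sit near the analytical M/M/1/c value ρ^c(1−ρ)/(1−ρ^{c+1}) ≈ 0.089 with ρ = 0.8.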
Outpatient Waiting Time in Health Services and Teaching Hospitals: A Case Study in Iran
Mohebbifar, Rafat; Hasanpoor, Edris; Mohseni, Mohammad; Sokhanvar, Mobin; Khosravizadeh, Omid; Isfahani, Haleh Mousavi
2014-01-01
Background: One of the most important indexes of health care quality is patient satisfaction, which is achieved only when processes are properly managed. One such process in health care organizations is the management of waiting times. The aim of this study is the systematic analysis of outpatient waiting time. Methods: This descriptive cross-sectional study, conducted in 2011, is an applied study performed in the educational and health care hospitals of one of the medical universities located in the northwest of Iran. Since the distributions of outpatients in all months were equal, stage sampling was used. 160 outpatients were studied and the data were analyzed using SPSS software. Results: The results showed that the waiting time for outpatients of the ophthalmology clinic, with an average of 245 minutes per patient, was the longest among the clinics. The orthopedic clinic had the shortest waiting time, with an average of 77 minutes per patient. The total average waiting time for each patient in the educational hospitals under study was about 161 minutes. Conclusion: By applying appropriate models, the waiting time can be reduced, especially in the interval before admission to the examination room. Models including scheduling before admission, electronic visit systems via the internet, a process model, the six sigma model, the queuing theory model and the FIFO model are components of interventions that reduce outpatient waiting time. PMID:24373277
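As a minimal illustration of how a queuing-theory model quantifies such waiting, the standard M/M/1 formulas can be evaluated directly; the clinic arrival and service rates below are hypothetical, not figures from this study:

```python
def mm1_waiting(lam, mu):
    """Expected time in queue (Wq) and in system (W) for an M/M/1
    queue with arrival rate lam and service rate mu (same time
    units). Requires lam < mu for a stable queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    wq = rho / (mu - lam)          # mean wait before service starts
    w = 1.0 / (mu - lam)           # mean total time in the system
    return wq, w

# Hypothetical clinic: 18 patients/hour arriving, one physician
# seeing 20 patients/hour (rates expressed per minute).
wq, w = mm1_waiting(lam=18 / 60, mu=20 / 60)
print(f"mean wait: {wq:.0f} min, mean total: {w:.0f} min")  # 27 and 30 min
```

Note how close utilization (ρ = 0.9) drives the wait far above the 3-minute service time itself, which is the basic argument for the scheduling interventions listed above.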
ST-analyzer: a web-based user interface for simulation trajectory analysis.
Jeong, Jong Cheol; Jo, Sunhwan; Wu, Emilia L; Qi, Yifei; Monje-Galvan, Viviana; Yeom, Min Sun; Gorenstein, Lev; Chen, Feng; Klauda, Jeffery B; Im, Wonpil
2014-05-05
Molecular dynamics (MD) simulation has become one of the key tools to obtain deeper insights into biological systems using various levels of descriptions such as all-atom, united-atom, and coarse-grained models. Recent advances in computing resources and MD programs have significantly accelerated the simulation time and thus increased the amount of trajectory data. Although many laboratories routinely perform MD simulations, analyzing MD trajectories is still time consuming and often a difficult task. ST-analyzer, http://im.bioinformatics.ku.edu/st-analyzer, is a standalone graphical user interface (GUI) toolset to perform various trajectory analyses. ST-analyzer has several outstanding features compared to other existing analysis tools: (i) handling various formats of trajectory files from MD programs, such as CHARMM, NAMD, GROMACS, and Amber, (ii) intuitive web-based GUI environment--minimizing administrative load and reducing burdens on the user from adapting new software environments, (iii) platform independent design--working with any existing operating system, (iv) easy integration into job queuing systems--providing options of batch processing either on the cluster or in an interactive mode, and (v) providing independence between foreground GUI and background modules--making it easier to add personal modules or to recycle/integrate pre-existing scripts utilizing other analysis tools. The current ST-analyzer contains nine main analysis modules that together contain 18 options, including density profile, lipid deuterium order parameters, surface area per lipid, and membrane hydrophobic thickness. This article introduces ST-analyzer with its design, implementation, and features, and also illustrates practical analysis of lipid bilayer simulations. Copyright © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Griffiths, Bradley Joseph
New supply chain management methods using radio frequency identification (RFID) and global positioning system (GPS) technology are quickly being adopted by companies as various inventory management benefits are being realized. For example, companies such as Nippon Yusen Kaisha (NYK) Logistics use the technology coupled with geospatial support systems for distributors to quickly find and manage freight containers. Traditional supply chain management methods require pen-to-paper reporting, searching inventory on foot, and human data entry. Some companies that prioritize supply chain management have not adopted the new technology, because they may feel that their traditional methods save the company expenses. This thesis serves as a pilot study that examines how information technology (IT) utilizing RFID and GPS technology can serve to increase workplace productivity, decrease the human labor associated with inventorying, and be used for spatial analysis by management. This pilot study represents the first attempt to couple RFID technology with Geographic Information Systems (GIS) in supply chain management efforts to analyze and locate mobile assets, exploring the costs and benefits of implementation and how the technology can be employed. This pilot study identified a candidate to implement a new inventory management method as XYZ Logistics, Inc. XYZ Logistics, Inc. is a fictitious company but represents a factual corporation. The name has been changed to provide the company with anonymity and to not disclose confidential business information. XYZ Logistics, Inc., is a nation-wide company that specializes in providing space solutions for customers including portable offices, storage containers, and customizable buildings.
Effect of smoke-free policies on the behaviour of social smokers.
Philpot, S J; Ryan, S A; Torre, L E; Wilcox, H M; Jalleh, G; Jamrozik, K
1999-01-01
To test the hypothesis that proposed amendments to the Occupational Safety and Health Act, making all enclosed workplaces in Western Australia smoke free, would result in a decrease in cigarette consumption by patrons of nightclubs, pubs, and restaurants without adversely affecting attendance. Cross sectional structured interview survey. Patrons of several inner city pubs and nightclubs in Perth were interviewed while queuing for admission to these venues. Current social habits, smoking habits, and how these might be affected by the proposed regulations. Persons who did not smoke daily were classified as "social smokers." Half (50%) of the 374 patrons interviewed were male; 51% currently did not smoke at all, 34.3% smoked every day, and the remaining 15.7% smoked, but not every day. A clear majority (62.5%) of all 374 respondents anticipated no change in the frequency of their patronage of hospitality venues if smoke-free policies became mandatory. One in five (19.3%) indicated that they would go out more often, and 18.2% said they would go out less often. Half (52%) of daily smokers anticipated no change in their cigarette consumption, while 44.5% of daily smokers anticipated a reduction in consumption. A majority of social smokers (54%) predicted a reduction in their cigarette consumption, with 42% of these anticipating quitting. One in nine (11.5%) smokers said that adoption of smoke-free policies would prompt them to quit smoking entirely, without a significant decrease in attendance at pubs and nightclubs being anticipated. There can be few other initiatives as simple, cheap, and popular that would achieve so much for public health.
Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele
2014-01-01
Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: approximately 8 GB per MEA per hour, uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEAs users. PMID:24678297
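The per-channel decomposition described above can be sketched with a worker pool from the Python standard library; the baseline removal and threshold-based spike detection here are crude stand-ins for the actual QSpike Tools modules, and a real deployment would dispatch work items to processes or cluster nodes rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess_channel(args):
    """Stand-in for one channel's pipeline: remove the mean
    baseline, then flag samples exceeding a threshold as spikes."""
    channel_id, trace = args
    baseline = sum(trace) / len(trace)
    filtered = [v - baseline for v in trace]          # crude detrend
    mean_abs = sum(abs(v) for v in filtered) / len(filtered)
    threshold = 5 * mean_abs                          # ad hoc threshold
    spikes = [i for i, v in enumerate(filtered) if abs(v) > threshold]
    return channel_id, spikes

# 60 channels of synthetic data, one queued work item per channel.
traces = [(ch, [0.0] * 100) for ch in range(60)]
traces[3][1][50] = 40.0                # plant one spike on channel 3
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(preprocess_channel, traces))
print(results[3])                      # prints [50]
```

Because every channel is independent, the same decomposition scales from one multi-core machine to a batch-queued cluster without changing the per-channel code.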
Results from an evaluation of tobacco control policies at the 2010 Shanghai World Expo
Li, Xiang; Zheng, PinPin; Fu, Hua; Berg, Carla; Kegler, Michelle
2013-01-01
Background Large-scale international events such as World Expos and Olympic Games have the potential to strengthen smoke-free norms globally. The Shanghai 2010 World Expo was one of the first large-scale events to implement and evaluate the adoption of strict tobacco control policies. Objective To evaluate implementation of tobacco control policies at the 2010 World Expo in Shanghai, China. Methods This mixed methods evaluation was conducted from July to October 2010. Observations were conducted in all 155 pavilions and outdoor queuing areas, all 45 souvenir shops, a random sample of restaurants (51 of 119) and selected outdoor non-smoking areas in all sections of the Expo. In addition, intercept surveys were completed with 3022 visitors over a 4-month period. Results All pavilions and souvenir shops were smoke-free. Restaurants were smoke-free, with only 0.1% of customers observed smoking. Smoking was more common in outdoor non-smoking areas, but still relatively rare overall with only 4.5% of visitors observed smoking. Tobacco products were not sold or marketed in any public settings except for three pavilions that had special exemptions from the policy. Overall, 80.3% of visitors were aware of the smoke-free policy at the World Expo, 92.5% of visitors supported the policy and 97.1% of visitors were satisfied with the smoke-free environment. Conclusions Tobacco control policies at the World Expo sites were generally well-enforced and accepted although compliance was not 100%, particularly in outdoor non-smoking areas. PMID:23708269
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Technical Reports Server (NTRS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-01-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality, such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow "watch lists" to be specified by users; when X-ray fields in a user's watch list become publicly available, they will be automatically added to the queue. XAssist is being expanded to be used as a survey planning tool when coupled with simulation software, including functionality for NuSTAR, eROSITA, IXO, and the Wide Field X-ray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Saqib; Wang, Guojun; Cottrell, Roger Leslie
2018-05-28
PingER (Ping End-to-End Reporting) is a worldwide end-to-end Internet performance measurement framework. Developed by the SLAC National Accelerator Laboratory, Stanford, USA, it has been running for the last 20 years. It has more than 700 monitoring agents and remote sites that monitor the performance of Internet links in around 170 countries of the world. At present, the size of the compressed PingER data set is about 60 GB, comprising 100,000 flat files. The data is publicly available for valuable Internet performance analyses. However, the data sets suffer from missing values and anomalies due to congestion, bottleneck links, queuing overflow, network software misconfiguration, hardware failure, cable cuts, and social upheavals. Therefore, the objective of this paper is to detect such performance drops or spikes, labeled as anomalies or outliers, in the PingER data set. In the proposed approach, the raw text files of the data set are transformed into a PingER dimensional model. The missing values are imputed using the k-NN algorithm. The data is partitioned into similar instances using the k-means clustering algorithm. Afterward, clustering is integrated with the Local Outlier Factor (LOF) using the Cluster-Based Local Outlier Factor (CBLOF) algorithm to detect the anomalies or outliers in the PingER data. Lastly, anomalies are further analyzed to identify the time frame and location of the hosts generating the major percentage of the anomalies in the PingER data set, ranging from 1998 to 2016.
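A rough sketch of the described pipeline (k-NN imputation, k-means partitioning, then cluster-based outlier scoring) on synthetic stand-in data; the scoring below is a simplified CBLOF-style approximation written from scratch, not the paper's implementation, and the RTT/loss numbers are invented:

```python
import numpy as np

def knn_impute(X, k=5):
    """Fill NaNs in a row using the mean of the k nearest complete
    rows, with distance measured over that row's observed columns."""
    out = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        obs = ~np.isnan(X[i])
        d = np.linalg.norm(complete[:, obs] - X[i, obs], axis=1)
        nn = complete[np.argsort(d)[:k]]
        out[i, ~obs] = nn[:, ~obs].mean(axis=0)
    return out

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's-algorithm k-means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def cblof_scores(X, labels, centers):
    """CBLOF-style score: distance to the point's own cluster
    centroid, inflated for points sitting in small clusters."""
    sizes = np.bincount(labels, minlength=len(centers))
    d = np.linalg.norm(X - centers[labels], axis=1)
    return d * (len(X) / np.maximum(sizes[labels], 1))

# Synthetic stand-in for PingER measurements: (RTT ms, loss %) with
# a dense normal cluster plus five congestion spikes.
rng = np.random.default_rng(1)
normal = np.column_stack([rng.normal(50, 2, 200), rng.normal(0.5, 0.1, 200)])
spikes = np.column_stack([rng.normal(400, 20, 5), rng.normal(20, 2, 5)])
X = np.vstack([normal, spikes])
X[10, 1] = np.nan                      # one missing loss value

X = knn_impute(X)
labels, centers = kmeans(X, k=2)
scores = cblof_scores(X, labels, centers)
top = np.argsort(scores)[-5:]          # five most anomalous rows
print(sorted(top.tolist()))
```

On this toy data the five planted spikes (rows 200-204) dominate the scores regardless of how k-means splits the normal cluster, which is the property CBLOF-type methods rely on.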
Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure
NASA Astrophysics Data System (ADS)
Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.
2014-12-01
Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are not in use, ensuring that compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server-load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resource and thereby do not result in a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs → the Hadoop/Big Data case; B. transactional applications → any application that processes continuous transactions (requests/responses). With reference to the classical queuing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add and delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue and use its metrics to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to get down to the level of load (CPU/data) on each server to characterize the size requirement? How do we learn that based on job type?
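A queue-length-driven sizing rule of the kind discussed above can be sketched as follows; the policy, rates, and bounds are hypothetical illustrations, not Amazon's or Azure's auto-scaling APIs:

```python
import math

def scale_decision(queue_len, per_server_rate, target_wait,
                   min_servers=1, max_servers=100):
    """Choose the smallest server count that can drain the current
    job queue within target_wait time units, clamped to pool bounds.
    All parameters are illustrative, not a provider API."""
    needed = math.ceil(queue_len / (per_server_rate * target_wait))
    return max(min_servers, min(max_servers, needed))

# Backlog of 500 jobs, each server clears 10 jobs/min, and we want
# the queue drained within 5 minutes -> 10 servers.
print(scale_decision(queue_len=500, per_server_rate=10, target_wait=5))  # prints 10
```

This captures the quasi-stationary idea in the abstract: the queue length is re-measured each control interval and the pool is re-sized, rather than assuming a fixed server count as classical stationary analysis does.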
Henry, Kevin; Wood, Nathan J.; Frazier, Tim G.
2017-01-01
Tsunami evacuation planning in coastal communities is typically focused on local events where at-risk individuals must move on foot in a matter of minutes to safety. Less attention has been placed on distant tsunamis, where evacuations unfold over several hours, are often dominated by vehicle use and are managed by public safety officials. Traditional traffic simulation models focus on estimating clearance times but often overlook the influence of varying population demand, alternative modes, background traffic, shadow evacuation, and traffic management alternatives. These factors are especially important for island communities with limited egress options to safety. We use the coastal community of Balboa Island, California (USA), as a case study to explore the range of potential clearance times prior to wave arrival for a distant tsunami scenario. We use a first-in–first-out queuing simulation environment to estimate variations in clearance times, given varying assumptions of the evacuating population (demand) and the road network over which they evacuate (supply). Results suggest clearance times are less than wave arrival times for a distant tsunami, except when we assume maximum vehicle usage for residents, employees, and tourists for a weekend scenario. A two-lane bridge to the mainland was the primary traffic bottleneck, thereby minimizing the effect of departure times, shadow evacuations, background traffic, boat-based evacuations, and traffic light timing on overall community clearance time. Reducing vehicular demand generally reduced clearance time, whereas improvements to road capacity had mixed results. Finally, failure to recognize non-residential employee and tourist populations in the vehicle demand substantially underestimated clearance time.
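The effect of a fixed-capacity bottleneck such as the two-lane bridge can be illustrated with a minimal first-in-first-out drain model (a toy sketch, not the authors' simulation environment; the vehicle counts are invented):

```python
def clearance_time(arrivals, capacity):
    # FIFO queue drained through a fixed-capacity bottleneck (e.g. a bridge):
    # arrivals[t] vehicles join the queue at step t, at most `capacity`
    # vehicles cross per step; returns the number of steps to clear.
    queue = 0
    t = 0
    arrivals = list(arrivals)
    while t < len(arrivals) or queue > 0:
        if t < len(arrivals):
            queue += arrivals[t]
        queue = max(0, queue - capacity)
        t += 1
    return t

print(clearance_time([30, 30, 0, 0], capacity=10))  # -> 6
```

Because the bridge capacity dominates, spreading the same demand over more departure steps changes the clearance time very little, which mirrors the paper's finding that departure timing mattered less than total vehicular demand.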
Emergence of bursts and communities in evolving weighted networks.
Jo, Hang-Hyun; Pan, Raj Kumar; Kaski, Kimmo
2011-01-01
Understanding the patterns of human dynamics and social interaction, and the way they lead to the formation of an organized and functional society, are important issues especially for techno-social development. Addressing these issues of social networks has recently become possible through large-scale data analysis of mobile phone call records, which has revealed the existence of modular or community structure with many links between nodes of the same community and relatively few links between nodes of different communities. The weights of links, e.g., the number of calls between two users, and the network topology are found to be correlated such that intra-community links are stronger than the weak inter-community links. This feature is known as Granovetter's "strength of weak ties" hypothesis. In addition to this inhomogeneous community structure, the temporal patterns of human dynamics turn out to be inhomogeneous or bursty, characterized by the heavy-tailed distribution of the time interval between two consecutive events, i.e., the inter-event time. In this paper, we study how the community structure and the bursty dynamics emerge together in a simple evolving weighted network model. The principal mechanisms behind these patterns are social interaction by cyclic closure (links to friends of friends) and focal closure (links to individuals sharing similar attributes or interests), and human dynamics by a task handling process. These three mechanisms have been implemented as a network model with local attachment, global attachment, and priority-based queuing processes. By comprehensive numerical simulations we show that the interplay of these mechanisms leads to the emergence of a heavy-tailed inter-event time distribution and the evolution of Granovetter-type community structure. Moreover, the numerical results are found to be in qualitative agreement with empirical analysis of a mobile phone call dataset.
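The priority-based queuing mechanism can be sketched with a Barabási-style task list (a simplified stand-in for the model's queuing process, with invented parameters): with high probability the highest-priority task is executed, otherwise a random one, so low-priority tasks occasionally wait very long, broadening the waiting-time distribution.

```python
import random

def simulate_waiting_times(n_steps, list_len=2, p_highest=0.9, seed=42):
    # At each step, execute the highest-priority task with probability
    # p_highest, otherwise a random one; replace the executed task with a
    # fresh random priority and record how long it had waited.
    rng = random.Random(seed)
    tasks = [rng.random() for _ in range(list_len)]
    ages = [0] * list_len
    waits = []
    for _ in range(n_steps):
        if rng.random() < p_highest:
            i = max(range(list_len), key=lambda j: tasks[j])
        else:
            i = rng.randrange(list_len)
        waits.append(ages[i])
        tasks[i] = rng.random()
        ages[i] = 0
        for j in range(list_len):
            if j != i:
                ages[j] += 1
    return waits

waits = simulate_waiting_times(10000)
# Most tasks are served immediately, but a few wait far longer: the broad
# waiting-time distribution underlying bursty inter-event times.
print(min(waits), max(waits))
```

In the full model this process runs on every node alongside the local and global attachment rules that build the community structure.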
Russo, Brendan J; Savolainen, Peter T; Gates, Timothy J; Kay, Jonathan J; Frazier, Sterling
2017-07-04
Although a considerable amount of prior research has investigated the impacts of speed limits on traffic safety and operations, much of this research, and nearly all of the research related to differential speed limits, has been specific to limited access freeways. The unique safety and operational issues on highways without access control create difficulty relating the conclusions from prior freeway-related speed limit research to 2-lane highways, particularly research on differential limits due to passing limitations and subsequent queuing. Therefore, the objective of this study was to assess differences in driver speed selection with respect to the posted speed limit on rural 2-lane highways, with a particular emphasis on the differences between uniform and differential speed limits. Data were collected from nearly 59,000 vehicles across 320 sites in Montana and 4 neighboring states. Differences in mean speeds, 85th percentile speeds, and the standard deviation in speeds for free-flowing vehicles were examined across these sites using ordinary least squares regression models. Ultimately, the results of the analysis show that the mean speed, 85th percentile speed, and variability in travel speeds for free-flowing vehicles on 2-lane highways are generally lower at locations with uniform 65 mph speed limits, compared to locations with differential limits of 70 mph for cars and 60 mph for trucks. In addition to posted speed limits, several site characteristics were shown to influence speed selection including shoulder widths, frequency of horizontal curves, percentage of the segment that included no passing zones, and hourly volumes. Differences in vehicle speed characteristics were also observed between states, indicating that speed selection may also be influenced by local factors, such as driver population or enforcement.
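The study's speed comparisons rest on ordinary least squares regression. For a single predictor the closed-form fit is short enough to show directly; the data below are invented, with x encoding differential-limit (1) versus uniform-limit (0) sites:

```python
def ols_fit(xs, ys):
    # Closed-form ordinary least squares for one predictor:
    # slope b = cov(x, y) / var(x), intercept a = mean(y) - b * mean(x).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical mean speeds: intercept = uniform-limit site mean,
# slope = estimated speed difference at differential-limit sites.
a, b = ols_fit([0, 0, 1, 1], [64.0, 66.0, 68.0, 70.0])
print(a, b)  # -> 65.0 4.0
```

The paper's models add further site covariates (shoulder width, curve frequency, no-passing percentage, hourly volume), but the estimation principle is the same.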
SciDAC-Data: Enabling Data-Driven Modeling of Exascale Computing
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; ...
2017-11-23
Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular, we describe how the Sequential Access via Metadata (SAM) data-handling system, in combination with the dCache/Enstore-based data archive facilities, has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
On-board closed-loop congestion control for satellite based packet switching networks
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Ivancic, William D.; Kim, Heechul
1993-01-01
NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate, and thus it is necessary to integrate a congestion control mechanism as part of the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of ground stations: the satellite continuously broadcasts the status of its output buffer, and the ground stations respond by selectively discarding packets or by tagging the excess packets as low-priority. The two schemes are evaluated by theoretical queuing analysis and simulation. The former is used to analyze the simplified model and to determine the basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes closed-loop congestion control less responsive, so the broadcast information can only be used to extract statistical information. The discarding scheme needs carefully chosen status information and reduction functions, and normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective since it tolerates more uncertainty and allows a larger margin of error in status information. It can protect high-priority packets from excessive loss and fully utilize the downlink bandwidth at the same time.
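The effect of propagation delay on a reactive discarding scheme can be sketched with a toy discrete-time model (invented for illustration; the paper's queuing analysis is more detailed): ground stations see the on-board buffer level several steps late and throttle their sending rate when it looks congested.

```python
def simulate(buffer_limit, delay, steps, arrival, service, threshold):
    # On-board buffer with delayed global feedback: stations observe the
    # buffer level `delay` steps late and halve their rate when the stale
    # reading exceeds `threshold`; overflow packets are counted as lost.
    buf, lost, history = 0, 0, [0] * delay
    for _ in range(steps):
        rate = arrival // 2 if history[0] > threshold else arrival
        history = history[1:] + [buf]   # feedback in flight
        buf += rate
        if buf > buffer_limit:
            lost += buf - buffer_limit  # on-board packet loss
            buf = buffer_limit
        buf = max(0, buf - service)
    return lost

# Staler feedback (a GEO-like delay) leads to more on-board loss.
print(simulate(50, 1, 200, 10, 8, 30), simulate(50, 20, 200, 10, 8, 30))
```

This reproduces the qualitative conclusion above: the longer the feedback loop, the less responsive the control and the more loss the satellite absorbs.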
Ardern-Jones, Joanne; Hughes, Donald K; Rowe, Philip H; Mottram, David R; Green, Christopher F
2009-04-01
This study assessed the attitudes of Emergency Department (ED) staff regarding the introduction of an automated stock-control system. The objectives were to determine attitudes to stock control and replenishment, speed of access to the system, ease of use and the potential for future uses of the system. The study was carried out in the Countess of Chester Hospital NHS Foundation Trust (COCH) ED, which is attended by over 65,000 patients each year. All 68 ED staff were sent pre-piloted, semi-structured questionnaires and reminders, before and after automation of medicines stock control. Pre-implementation, 35 staff (66.1% of respondents) reported that problems occurred with access to medicine storage keys 'very frequently' or 'frequently'. Twenty-eight (52.8%) respondents 'agreed' or 'strongly agreed' that medicines were quickly accessed, which rose to 41 (77%) post-automation (P < 0.001). Improvement was reported in stock replenishment and storage of stock injections and oral medicines, but there were mixed opinions regarding storage of bulk fluids and refrigerated items. Twenty-seven (51.9%) staff reported access to the system within 1 min and 17 (32.7%) staff reported access within 1-2 min. The majority of staff found the system 'easy' or 'very easy' to use and there was a non-significant relationship between previous use of information technology and acceptance of the system. From a staff satisfaction perspective, automation improved medicines storage, security and stock control, and addressed the problem of searching for keys to storage areas. Concerns over familiarity with computers, queuing, speed of access and an improved audit trail do not appear to have been issues, when compared with the previous manual storage of medicines.
NASA Astrophysics Data System (ADS)
Wang, Li; Liu, Mao; Meng, Bo
2013-02-01
In China, mountainous areas account for a significant proportion of the territory, as does the number of people who live in them. When production accidents or natural disasters happen, the residents of mountain areas must be evacuated, and such evacuation is of obvious importance to public safety. Unfortunately, there are few studies on safe evacuation over rough terrain, and the particularity of the complex terrain in mountain areas makes pedestrian evacuation difficult to study. In this paper, a three-dimensional surface cellular automata model is proposed to numerically simulate the real-time dynamic evacuation of residents. The model takes into account the topographic characteristics (slope gradient) of the environment and the biomechanical characteristics (weight and leg extensor power) of the residents to calculate walking speed. This paper focuses only on the influence of topography; the physiological parameters are defined as constants according to a statistical report. Velocity varies with the topography. In order to simulate the behavior of a crowd with varying movement velocities, a numerical algorithm is used to determine the time step of iteration. By doing so, a numerical simulation can be conducted in a 3D surface CA model. Moreover, taking the evacuation of residents around a gas well in a mountain area as a case study, a visualization system for three-dimensional simulation of pedestrian evacuation is developed. In the simulation process, crowd behaviors such as congestion, queuing, and collision avoidance can be observed. The simulation results are explained reasonably. Therefore, the model presented in this paper can realize a vivid 3D dynamic simulation of pedestrian evacuation in complex terrain and predict the evacuation procedure and the evacuation time required, which can supply valuable information for emergency management.
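As a stand-in for the paper's biomechanics-based speed model, the dependence of walking speed on slope can be illustrated with Tobler's hiking function, a commonly used empirical relation (an assumption here, not the authors' formula):

```python
import math

def tobler_speed(slope):
    # Tobler's hiking function: walking speed in km/h as a function of the
    # terrain slope dh/dx; maximum speed occurs on a slight downhill.
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

print(round(tobler_speed(0.0), 2))  # flat ground, about 5 km/h
print(round(tobler_speed(0.3), 2))  # steep uphill is much slower
```

In a surface CA model, each cell would carry its local slope, and a speed relation like this (or the paper's biomechanical one) would set how many time steps a pedestrian needs to traverse it.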
NASA Astrophysics Data System (ADS)
Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi
2016-08-01
Processing astronomical data to science readiness was and remains a challenge, in particular for multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users, as in a typical university setup. Our approach to this challenge is a flexible framework combining the best of both high performance (large number of nodes, internal communication) and high throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the workflow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool to the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
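The Server-Manager-Worker pattern can be sketched in miniature. For illustration the AMQP broker is replaced here by in-process queues and threads, and the square-number "work" is invented:

```python
import queue
import threading

def worker(tasks, results):
    # Worker loop: pull a task, execute it, report the result back.
    while True:
        item = tasks.get()
        if item is None:  # poison pill: the manager shrinks the pool
            break
        results.put((item, item * item))
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
pool = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(3)]
for t in pool:
    t.start()
for i in range(8):   # the server enqueues work units
    tasks.put(i)
for _ in pool:       # one pill per worker once the queue drains
    tasks.put(None)
for t in pool:
    t.join()
print(sorted(results.queue))  # .queue inspected only for demonstration
```

With a real message broker, the manager's add/remove/re-assign logic amounts to starting or stopping worker processes and re-binding them to different task queues.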
EON: software for long time simulations of atomic scale systems
NASA Astrophysics Data System (ADS)
Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme
2014-07-01
The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
A generic method for evaluating crowding in the emergency department.
Eiset, Andreas Halgreen; Erlandsen, Mogens; Møllekær, Anders Brøns; Mackenhauer, Julie; Kirkegaard, Hans
2016-06-14
Crowding in the emergency department (ED) has been studied intensively using complicated non-generic methods that may prove difficult to implement in a clinical setting. This study sought to develop a generic method to describe and analyse crowding from measurements readily available in the ED and to test the developed method empirically in a clinical setting. We conceptualised a model with ED patient flow divided into separate queues identified by timestamps for predetermined events. With temporal resolution of 30 min, queue lengths were computed as Q(t + 1) = Q(t) + A(t) - D(t), with A(t) = number of arrivals, D(t) = number of departures and t = time interval. Maximum queue lengths for each shift of each day were found and risks of crowding computed. All tests were performed using non-parametric methods. The method was applied in the ED of Aarhus University Hospital, Denmark utilising an open cohort design with prospectively collected data from a one-year observation period. By employing the timestamps already assigned to the patients while in the ED, a generic queuing model can be computed from which crowding can be described and analysed in detail. Depending on availability of data, the model can be extended to include several queues increasing the level of information. When applying the method empirically, 41,693 patients were included. The studied ED had a high risk of bed occupancy rising above 100 % during day and evening shift, especially on weekdays. Further, a 'carry over' effect was shown between shifts and days. The presented method offers an easy and generic way to get detailed insight into the dynamics of crowding in an ED.
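The queue recursion in the abstract translates directly into code (a minimal sketch; the per-interval counts below are invented):

```python
def queue_lengths(arrivals, departures, initial=0):
    # Direct implementation of the recursion Q(t+1) = Q(t) + A(t) - D(t),
    # with one entry per predetermined interval (30 minutes in the paper).
    q = [initial]
    for a, d in zip(arrivals, departures):
        q.append(q[-1] + a - d)
    return q

# Arrivals and departures counted per interval from ED timestamps.
print(queue_lengths([5, 3, 0, 2], [1, 4, 2, 3]))  # -> [0, 4, 3, 1, 0]
```

From such a series, the per-shift maxima and the risk of the queue exceeding capacity follow by simple aggregation, which is what makes the method generic.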
2014-01-01
Background As a part of nationwide healthcare reforms, the Chinese government launched web-based appointment systems (WAS) to provide a solution to problems around outpatient appointments and services. These have been in place in all Chinese public tertiary hospitals since 2009. Methods Questionnaires were collected from both patients and doctors in one large tertiary public hospital in Shanghai, China. Data were analyzed to measure their satisfaction and views about the WAS. Results The 1000 outpatients randomly selected for the survey were least satisfied about the waiting time to see a doctor. Even though the WAS provided a much more convenient booking method, only 17% of patients used it. Of the 197 doctors surveyed, over 90% thought it was necessary to provide alternative forms of appointment booking systems for outpatients. However, about 80% of those doctors who were not associate professors would like to provide an 'on-the-spot' appointment option, which would lead to longer waits for patients. Conclusions Patients were least satisfied about the waiting times. To effectively reduce appointment-waiting times is therefore an urgent issue. Despite the benefits of using the WAS, most patients still registered via the usual method of queuing, suggesting that hospitals and health service providers should promote and encourage the use of the WAS. Furthermore, Chinese health providers need to help doctors to take others' opinions or feedback into consideration when treating patients to minimize the gap between patients' and doctors' opinions. These findings may provide useful information for both practitioners and regulators, and improve recognition of this efficient and useful booking system, which may have far-reaching and positive implications for China's ongoing reforms. PMID:24912568
VML 3.0 Reactive Sequencing Objects and Matrix Math Operations for Attitude Profiling
NASA Technical Reports Server (NTRS)
Grasso, Christopher A.; Riedel, Joseph E.
2012-01-01
VML (Virtual Machine Language) has been used as the sequencing flight software on over a dozen JPL deep-space missions, most recently flying on GRAIL and JUNO. In conjunction with the NASA SBIR entitled "Reactive Rendezvous and Docking Sequencer", VML version 3.0 has been enhanced to include object-oriented element organization, built-in queuing operations, and sophisticated matrix/vector operations. These improvements allow VML scripts to easily perform much of the work that formerly would have required a great deal of expensive flight software development to realize. Autonomous turning and tracking makes considerable use of new VML features. Profiles generated by flight software are managed using object-oriented VML data constructs executed in discrete time by the VML flight software. VML vector and matrix operations provide the ability to calculate and supply quaternions to the attitude controller flight software, which produces torque requests. Using VML-based attitude planning components eliminates flight software development effort and reduces corresponding costs. In addition, the direct management of the quaternions allows turning and tracking to be tied in with sophisticated high-level VML state machines. These state machines provide autonomous management of spacecraft operations during critical tasks like a hypothetical Mars sample return rendezvous and docking. State machines created for autonomous science observations can also use this sort of attitude planning system, allowing heightened autonomy levels to reduce operations costs. VML state machines cannot be considered merely sequences - they are reactive logic constructs capable of autonomous decision making within a well-defined domain. The state machine approach enabled by VML 3.0 is progressing toward flight capability with a wide array of applicable mission activities.
Productivity improvement through cycle time analysis
NASA Astrophysics Data System (ADS)
Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio
1996-09-01
A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (former AT&T) in Madrid, Spain. It is based on a comparison of the contribution of each process step in each technology against a target generated by a cycle time model. These targeted cycle times are obtained using capacity data for the machines processing those steps, queuing theory, and theory of constraints (TOC) principles (buffers to protect the bottleneck, and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE)-like analyses are performed on the machine groups with major differences between their target cycle times and real values. Comparisons between the current values of the parameters that govern their capacity (process times, availability, idles, reworks, etc.) and the engineering standards are made to detect the causes of excess contribution to cycle time. Several friendly, graphical tools have been developed to track and analyze those capacity parameters. Two tools have proven especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interrelation problems among machines, procedures, and direct labor. Performer is designed for a detailed, daily analysis of an isolated machine. The extensive use of this tool by the whole labor force has demonstrated impressive results in the elimination of multiple small inefficiencies, with direct positive implications for OEE. As for ASAP, it shows the lots in process/queue for different machines at the same time. ASAP is a powerful tool for analyzing product flow management and the assigned capacity for interdependent operations like cleaning and oxidation/diffusion. Additional tools have been developed to track, analyze, and improve process times and availability.
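The idea of a queuing-theory cycle time target can be illustrated with the simplest case, an M/M/1-style relation between utilization and cycle time (an assumption for illustration; the paper's capacity model is richer):

```python
def target_cycle_time(process_time, utilization):
    # M/M/1-style estimate: cycle time grows as 1/(1 - utilization), which
    # is why TOC reserves buffers (inventory) for the bottleneck machines
    # and targets low inventory everywhere else.
    return process_time / (1.0 - utilization)

print(target_cycle_time(2.0, 0.5))  # moderate load: CT = 2x process time
print(target_cycle_time(2.0, 0.9))  # near the bottleneck, CT explodes
```

Comparing such targets against measured step cycle times is what flags the machine groups worth an OEE-style investigation.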
SkyProbe, monitoring the absolute atmospheric transmission in the optical
NASA Astrophysics Data System (ADS)
Cuillandre, Jean-charles; Magnier, Eugene; Mahoney, William
2011-03-01
Mauna Kea is known for its pristine seeing conditions, but sky transparency can be an issue for science operations since 25% of the nights are not photometric, mostly due to high-altitude cirrus. Since 2001, the original single-channel SkyProbe has gathered one exposure every minute during each observing night using a small CCD camera with a very wide field of view (35 sq. deg.) encompassing the region pointed to by the telescope for science operations, and exposures long enough (40 seconds) to capture at least 100 stars of Hipparcos' Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). A key advantage of SkyProbe over direct thermal-infrared imaging detection of clouds is that it allows an accurate absolute measurement, within 5%, of the true atmospheric absorption by clouds affecting the data being gathered by the telescope's main science instrument. This system has proven crucial for decision making in CFHT queued service observing (QSO), which today represents 80% of the telescope time: science exposures taken in non-photometric conditions are automatically registered to be re-observed later (at 1/10th of the original exposure time per pointing in the observed filters) to ensure a proper final absolute photometric calibration. The new dual-color system (simultaneous B and V bands) will allow a better characterization of the sky properties atop Mauna Kea and will enable a better detection of the thinner cirrus (absorption down to 0.02 mag., i.e., 2%). SkyProbe is operated within the Elixir pipeline, a collection of tools used for handling the CFHT CCD mosaics (CFH12K and MegaCam), from data pre-processing to astrometric and photometric calibration.
Sojda, R.S.
2007-01-01
Decision support systems are often not empirically evaluated, especially the underlying modelling components. This can be attributed to such systems necessarily being designed to handle complex and poorly structured problems and decision making. Nonetheless, evaluation is critical and should be focused on empirical testing whenever possible. Verification and validation, in combination, comprise such evaluation. Verification is ensuring that the system is internally complete, coherent, and logical from a modelling and programming perspective. Validation is examining whether the system is realistic and useful to the user or decision maker, and should answer the question: “Was the system successful at addressing its intended purpose?” A rich literature exists on verification and validation of expert systems and other artificial intelligence methods; however, no single evaluation methodology has emerged as preeminent. At least five approaches to validation are feasible. First, under some conditions, decision support system performance can be tested against a preselected gold standard. Second, real-time and historic data sets can be used for comparison with simulated output. Third, panels of experts can be judiciously used, but often are not an option in some ecological domains. Fourth, sensitivity analysis of system outputs in relation to inputs can be informative. Fifth, when validation of a complete system is impossible, examining major components can be substituted, recognizing the potential pitfalls. I provide an example of evaluation of a decision support system for trumpeter swan (Cygnus buccinator) management that I developed using interacting intelligent agents, expert systems, and a queuing system. Predicted swan distributions over a 13-year period were assessed against observed numbers. Population survey numbers and banding (ringing) studies may provide long term data useful in empirical evaluation of decision support.
Las Cumbres Observatory Global Telescope Network: Keeping Education in the Dark
NASA Astrophysics Data System (ADS)
Ross, Rachel J.
2007-12-01
Las Cumbres Observatory Global Telescope Network is a non-profit organization that is building a completely robotic network of telescopes for education (24 x 0.4 m, in clusters of 4) and science (18 x 1.0 m, in clusters of 3, and 2 x 2.0 m), longitudinally spaced so that there will always be at least one cluster in the dark. The network will be completely accessible online, with observations completed in either real-time or queue-based modes. The network will also have the ability to complete very long observations of all kinds of variable objects, and will include a rapid-response system that allows the telescopes to quickly slew to unexpected phenomena and provide around-the-clock monitoring. Students will be able to do research projects using and collecting data from both the long observations (e.g. extrasolar planet follow-up, variable star light curves, etc.) and the quick response (e.g. supernovae, GRBs, etc.), as well as use their own ideas to create personalized projects. Also available online will be a huge archive of data and the ability to use online software to process it. A large library of activities and resources will be available for all age groups and levels of science. LCOGTN will work cooperatively with international organizations to bring a vast amount of knowledge and experience together to create a world-class program. Through these collaborations, pilots have already been started in a few European countries, as well as trial programs involving schools partnered between the USA and UK. LCOGTN's education network will provide an avenue for educators and learners to use cutting-edge technology to do real science. All you need is a broadband internet connection, a computer, and lots of enthusiasm and imagination.
NASA Astrophysics Data System (ADS)
Cuillandre, J.-C.; Magnier, E.; Sabin, D.; Mahoney, B.
2016-05-01
Mauna Kea is known for its pristine seeing conditions, but sky transparency can be an issue for science operations since at least 25% of the observable (i.e. open dome) nights are not photometric, an effect mostly due to high-altitude cirrus. Since 2001, the original single-channel SkyProbe mounted in parallel on the Canada-France-Hawaii Telescope (CFHT) has gathered one V-band exposure every minute during each observing night using a small CCD camera offering a very wide field of view (35 sq. deg.) encompassing the region pointed to by the telescope for science operations, and exposures long enough (40 seconds) to capture at least 100 stars of Hipparcos' Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). The measurement of the true atmospheric absorption is achieved to within 2%, a key advantage over all-sky direct thermal-infrared imaging detection of clouds. The absolute measurement of the true atmospheric absorption by clouds and particulates affecting the data being gathered by the telescope's main science instrument has proven crucial for decision making in CFHT queued service observing (QSO), which today represents all of the telescope time. Also, science exposures taken in non-photometric conditions are automatically registered for a new observation at a later date, at 1/10th of the original exposure time, in photometric conditions to ensure a proper final absolute photometric calibration. Photometric standards are observed only when conditions are reported as perfectly stable by SkyProbe. The more recent dual-color system (simultaneous B and V bands) will offer a better characterization of the sky properties above Mauna Kea and should enable a better detection of the thinnest cirrus (absorption down to 0.01 mag., or 1%).
Real-Time Multimission Event Notification System for Mars Relay
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.
2013-01-01
As the Mars Relay Network is in constant flux (missions and teams going through their daily workflow), it is imperative that users are aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time, i.e., messages are pushed to the user while logged into the system, and queued when the user is not online for later viewing. The software does not do away with the email notifications, but augments them with in-line notifications. Further, this software expands the events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated e-mails often "get lost" among the other e-mail that comes in. This software allows for an expanded set of notifications (including user-generated ones) displayed in-line in the program. Separating notifications in this way can improve a user's workflow.
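The push-when-online, queue-when-offline delivery described above can be sketched as follows. This is a minimal hypothetical model, not the MaROS implementation; the class and method names are invented:

```python
from collections import defaultdict, deque

class NotificationHub:
    """Push events to online users immediately; queue them for
    offline users so they appear at the next login."""

    def __init__(self):
        self.online = set()
        self.pending = defaultdict(deque)   # user -> queued events
        self.delivered = defaultdict(list)  # user -> events shown

    def login(self, user):
        self.online.add(user)
        # Flush anything queued while the user was away.
        while self.pending[user]:
            self.delivered[user].append(self.pending[user].popleft())

    def logout(self, user):
        self.online.discard(user)

    def publish(self, user, event):
        if user in self.online:
            self.delivered[user].append(event)   # real-time push
        else:
            self.pending[user].append(event)     # hold for later viewing
```

The key property, as in the abstract, is that no event is lost: it is either pushed immediately or retained until the user returns.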
Design of object-oriented distributed simulation classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D. (Principal Investigator)
1995-01-01
Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented; its application to realistic configurations has not been carried out.
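The report's analytical model is based on queuing network theory; a much cruder fixed-overhead model (an assumption of this sketch, not the report's model) still illustrates why communication overhead is "swamped" only when modules compute long enough per step:

```python
def parallel_efficiency(t_compute, t_comm):
    """Fraction of wall time spent on useful computation per iteration,
    when every step also pays a fixed communication/synchronization
    cost t_comm. Simplified illustration only: real queuing-network
    models account for contention and waiting, which this does not."""
    return t_compute / (t_compute + t_comm)
```

When a module computes for 100 time units against 1 unit of overhead, efficiency exceeds 99%; when per-iteration work shrinks below the overhead, more than half the wall time is lost to communication, matching the qualitative finding above.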
On-demand Simulation of Atmospheric Transport Processes on the AlpEnDAC Cloud
NASA Astrophysics Data System (ADS)
Hachinger, S.; Harsch, C.; Meyer-Arnek, J.; Frank, A.; Heller, H.; Giemsa, E.
2016-12-01
The "Alpine Environmental Data Analysis Centre" (AlpEnDAC) develops a data-analysis platform for high-altitude research facilities within the "Virtual Alpine Observatory" project (VAO). This platform, with its web portal, will support use cases going well beyond data management: on user request, the data are augmented with "on-demand" simulation results, such as air-parcel trajectories for tracing down the source of pollutants when they appear in high concentration. The respective back-end mechanism uses the Compute Cloud of the Leibniz Supercomputing Centre (LRZ) to transparently calculate results requested by the user, insofar as they have not yet been stored in AlpEnDAC. The queuing-system operation model common in supercomputing is replaced by a model in which Virtual Machines (VMs) on the cloud are automatically created/destroyed, providing the necessary computing power immediately on demand. From a security point of view, this allows simulations to be performed in a sandbox defined by the VM configuration, without direct access to a computing cluster. Within a few minutes, the user receives conveniently visualized results. The AlpEnDAC infrastructure is distributed among two participating institutes [front-end at the German Aerospace Centre (DLR), simulation back-end at LRZ], requiring an efficient mechanism for synchronization of measured and augmented data. We discuss our iRODS-based solution for these data-management tasks as well as the general AlpEnDAC framework. Our cloud-based offerings aim at making scientific computing for our users much more convenient and flexible than it has been, and at allowing scientists without a broad background in scientific computing to benefit from complex numerical simulations.
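The compute-if-absent pattern described above (return stored results directly; otherwise spin up a VM on demand, compute, persist, tear down) can be sketched as below. This is a hypothetical minimal model, not the AlpEnDAC back-end; all names are invented and a plain function stands in for the cloud VM:

```python
class OnDemandService:
    """Return cached results when present; otherwise 'create a VM'
    (here: just count it and call a function), compute, and store."""

    def __init__(self, compute):
        self.compute = compute   # e.g. a trajectory simulation
        self.store = {}          # results already persisted
        self.vm_starts = 0       # how often a VM had to be created

    def request(self, key):
        if key not in self.store:
            self.vm_starts += 1              # VM created on demand
            self.store[key] = self.compute(key)
            # ...VM destroyed here in the real system...
        return self.store[key]
```

Repeated requests for the same result cost nothing after the first, which is the point of persisting augmented data.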
Scidac-Data: Enabling Data Driven Modeling of Exascale Computing
NASA Astrophysics Data System (ADS)
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert
2017-10-01
The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
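The caching behaviors these queuing simulations model can be illustrated at toy scale by replaying a file-access trace against an LRU cache. This sketch is not the SciDAC-Data simulator; the function name and trace are invented, and the real models cover workflow structure and data movement far beyond a serial replay:

```python
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Replay an access trace against an LRU cache of the given
    capacity and return the fraction of accesses served from cache."""
    cache = OrderedDict()
    hits = 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[item] = True
    return hits / len(trace)
```

Feeding such a replay with realistic input vectors, rather than synthetic traces, is exactly what the project's 71,000-workflow dataset enables.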
NASA Astrophysics Data System (ADS)
Zhang, Qunfang; Zhu, Yifang
2010-01-01
Increasing evidence has demonstrated toxic effects of vehicular emitted ultrafine particles (UFPs, diameter < 100 nm), with the highest human exposure usually occurring on and near roadways. Children are particularly at risk due to immature respiratory systems and faster breathing rates. In this study, children's exposure to in-cabin air pollutants, especially UFPs, was measured inside four diesel-powered school buses. Two 1990 and two 2006 model year diesel-powered school buses were selected to represent the age extremes of school buses in service. Each bus was driven on two routine bus runs to study school children's exposure under different transportation conditions in South Texas. The number concentration and size distribution of UFPs, total particle number concentration, PM2.5, PM10, black carbon (BC), CO, and CO₂ levels were monitored inside the buses. The average total particle number concentrations observed inside the school buses ranged from 7.3 × 10³ to 3.4 × 10⁴ particles cm⁻³, depending on engine age and window position. When the windows were closed, the in-cabin air pollutants were more likely due to the school buses' self-pollution. The 1990 model year school buses demonstrated much higher air pollutant concentrations than the 2006 model year ones. When the windows were open, the majority of in-cabin air pollutants came from the outside roadway environment, with similar pollutant levels observed regardless of engine age. The highest average UFP concentration was observed at a bus transfer station where approximately 27 idling school buses were queued to load or unload students. Starting up and idling generated higher air pollutant levels than the driving state. Higher in-cabin air pollutant concentrations were observed when more students were on board.
Standfield, L B; Comans, T A; Scuffham, P A
2017-01-01
To empirically compare Markov cohort modeling (MM) and discrete event simulation (DES) with and without dynamic queuing (DQ) for cost-effectiveness (CE) analysis of a novel method of health services delivery where capacity constraints predominate. A common data-set comparing usual orthopedic care (UC) to an orthopedic physiotherapy screening clinic and multidisciplinary treatment service (OPSC) was used to develop a MM and a DES without (DES-no-DQ) and with DQ (DES-DQ). Model results were then compared in detail. The MM predicted an incremental CE ratio (ICER) of $495 per additional quality-adjusted life-year (QALY) for OPSC over UC. The DES-no-DQ showed OPSC dominating UC; the DES-DQ generated an ICER of $2342 per QALY. The MM and DES-no-DQ ICER estimates differed due to the MM having implicit delays built into its structure as a result of having fixed cycle lengths, which are not a feature of DES. The non-DQ models assume that queues are at a steady state. Conversely, queues in the DES-DQ develop flexibly with supply and demand for resources, in this case, leading to different estimates of resource use and CE. The choice of MM or DES (with or without DQ) would not alter the reimbursement of OPSC as it was highly cost-effective compared to UC in all analyses. However, the modeling method may influence decisions where ICERs are closer to the CE acceptability threshold, or where capacity constraints and DQ are important features of the system. In these cases, DES-DQ would be the preferred modeling technique to avoid incorrect resource allocation decisions.
Sentry: An Automated Close Approach Monitoring System for Near-Earth Objects
NASA Astrophysics Data System (ADS)
Chamberlin, A. B.; Chesley, S. R.; Chodas, P. W.; Giorgini, J. D.; Keesey, M. S.; Wimberly, R. N.; Yeomans, D. K.
2001-11-01
In response to international concern about potential asteroid impacts on Earth, NASA's Near-Earth Object (NEO) Program Office has implemented a new system called "Sentry" to automatically update the orbits of all NEOs on a daily basis and compute Earth close approaches up to 100 years into the future. Results are published on our web site (http://neo.jpl.nasa.gov/) and updated orbits and ephemerides made available via the JPL Horizons ephemeris service (http://ssd.jpl.nasa.gov/horizons.html). Sentry collects new and revised astrometric observations from the Minor Planet Center (MPC) via their electronic circulars (MPECs) in near real time as well as radar and optical astrometry sent directly from observers. NEO discoveries and identifications are detected in MPECs and processed appropriately. In addition to these daily updates, Sentry synchronizes with each monthly batch of MPC astrometry and automatically updates all NEO observation files. Daily and monthly processing of NEO astrometry is managed using a queuing system which allows for manual intervention of selected NEOs without interfering with the automatic system. At the heart of Sentry is a fully automatic orbit determination program which handles outlier rejection and ensures convergence in the new solution. Updated orbital elements and their covariances are published via Horizons and our NEO web site, typically within 24 hours. A new version of Horizons, in development, will allow computation of ephemeris uncertainties using covariance data. The positions of NEOs with updated orbits are numerically integrated up to 100 years into the future and each close approach to any perturbing body in our dynamic model (all planets, Moon, Ceres, Pallas, Vesta) is recorded. Significant approaches are flagged for extended analysis including Monte Carlo studies. Results, such as minimum encounter distances and future Earth impact probabilities, are published on our NEO web site.
Efficient provisioning for multi-core applications with LSF
NASA Astrophysics Data System (ADS)
Dal Pra, Stefano
2015-12-01
Tier-1 sites providing computing power for HEP experiments are usually tightly designed for high throughput. This is pursued by reducing the variety of supported use cases and tuning for performance those that remain, the most important of which has been single-core jobs. Moreover, the usual workload is saturation: each available core in the farm is in use and there are queued jobs waiting for their turn to run. Enabling multi-core jobs thus requires dedicating a number of hosts on which to run them, and waiting for them to free the needed number of cores. This drain time introduces a loss of computing power driven by the number of unusable empty cores. As an increasing demand for multi-core capable resources has emerged, a Task Force has been constituted in WLCG, with the goal of defining a simple and efficient multi-core resource provisioning model. This paper details the work done at the INFN Tier-1 to enable multi-core support for the LSF batch system, with the intent of reducing the average number of unused cores to a minimum. The adopted strategy has been to dedicate to multi-core jobs a dynamic set of nodes, whose size is mainly driven by the number of pending multi-core requests and the fair-share priority of the submitting users. The node status transition, from single- to multi-core and vice versa, is driven by a finite state machine implemented in a custom multi-core director script running in the cluster. After describing and motivating both the implementation and the details specific to the LSF batch system, results about performance are reported. Factors having positive and negative impacts on the overall efficiency are discussed, and solutions to minimize the negative ones are proposed.
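The finite state machine driving a node's single-to-multi-core transition can be sketched as below. The states and transition rules here are illustrative assumptions, not the actual INFN director script, whose logic also involves fair-share priorities:

```python
class NodeDirector:
    """Toy FSM for one worker node: when multi-core jobs are pending,
    stop accepting single-core jobs ('draining'); once the node is
    fully drained, serve multi-core work; revert when demand ends."""

    def __init__(self):
        self.state = "single"

    def step(self, pending_mcore, running_score):
        """Advance one tick given the number of pending multi-core
        requests and of single-core jobs still running on this node."""
        if self.state == "single" and pending_mcore > 0:
            self.state = "draining"          # stop accepting 1-core jobs
        elif self.state == "draining" and running_score == 0:
            self.state = "multi"             # all cores free: go multi-core
        elif self.state == "multi" and pending_mcore == 0:
            self.state = "single"            # no more demand: revert
        return self.state
```

The cost the abstract highlights is visible in the "draining" state: cores freed by finished single-core jobs sit idle until the last one exits.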
Food insecurity and diabetes self-management among food pantry clients.
Ippolito, Matthew M; Lyles, Courtney R; Prendergast, Kimberly; Marshall, Michelle Berger; Waxman, Elaine; Seligman, Hilary Kessler
2017-01-01
To examine the association between level of food security and diabetes self-management among food pantry clients, which is largely not possible using clinic-based sampling methods. Cross-sectional descriptive study. Community-based food pantries in California, Ohio and Texas, USA, from March 2012 through March 2014. Convenience sample of adults with diabetes queuing at pantries (n 1237; 83 % response). Sampled adults were stratified as food secure, low food secure or very low food secure. We used point-of-care glycated Hb (HbA1c) testing to determine glycaemic control and captured diabetes self-management using validated survey items. The sample was 70 % female, 55 % Latino/Hispanic, 25 % white and 10 % black/African American, with a mean age of 56 years. Eighty-four per cent were food insecure, one-half of whom had very low food security. Mean HbA1c was 8·1 % and did not vary significantly by food security status. In adjusted models, very-low-food-secure participants, compared with both low-food-secure and food-secure participants, had poorer diabetes self-efficacy, greater diabetes distress, greater medication non-adherence, higher prevalence of severe hypoglycaemic episodes, higher prevalence of depressive symptoms, more medication affordability challenges, and more food and medicine or health supply trade-offs. Few studies of the health impact of food security have been able to examine very low food security. In a food pantry sample with high rates of food insecurity, we found that diabetes self-management becomes increasingly difficult as food security worsens. The efficacy of interventions to improve diabetes self-management may increase if food security is simultaneously addressed.
Underworld - Bringing a Research Code to the Classroom
NASA Astrophysics Data System (ADS)
Moresi, L. N.; Mansour, J.; Giordani, J.; Farrington, R.; Kaluza, O.; Quenette, S.; Woodcock, R.; Squire, G.
2017-12-01
While there are many reasons to celebrate the passing of punch card programming and flickering green screens, the loss of the sense of wonder at the very existence of computers and the calculations they make possible should not be numbered among them. Computers have become so familiar that students are often unaware that formal and careful design of algorithms and their implementations remains a valuable and important skill that has to be learned and practiced to achieve expertise and genuine understanding. In teaching geodynamics and geophysics at undergraduate level, we aimed to be able to bring our research tools into the classroom - even when those tools are advanced, parallel research codes that we typically deploy on hundreds or thousands of processors, and we wanted to teach not just the physical concepts that are modelled by these codes but a sense of familiarity with computational modelling and the ability to discriminate a reliable model from a poor one. The underworld code (www.underworldcode.org) was developed for modelling plate-scale fluid mechanics and studying problems in lithosphere dynamics. Though specialised for this task, underworld has a straightforward python user interface that allows it to run within the environment of jupyter notebooks on a laptop (at modest resolution, of course). The python interface was developed for adaptability in addressing new research problems, but also lends itself to integration into a python-driven learning environment. To manage the heavy demands of installing and running underworld in a teaching laboratory, we have developed a workflow in which we install docker containers in the cloud which support a number of students to run their own environment independently. We share our experience blending notebooks and static webpages into a single web environment, and we explain how we designed our graphics and analysis tools to allow notebook "scripts" to be queued and run on a supercomputer.
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Astrophysics Data System (ADS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-07-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality, such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow "watch lists" to be specified by users; when X-ray fields in a user's watch list become publicly available, they will be automatically added to the queue. XAssist is being expanded to be used as a survey planning tool when coupled with simulation software, including functionality for NuSTAR, eROSITA, IXO, and the Wide-Field X-ray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control. This work was funded by AISRP grant NNG06GE59G.
Williamson, Ann; Friswell, Rena
2013-09-01
The aim of this study was to explore the effects of external influences on long distance trucking, in particular, incentive-based remuneration systems and the need to wait or queue to load or unload on driver experiences of fatigue. Long distance truck drivers (n=475) were recruited at truck rest stops on the major transport corridors within New South Wales, Australia and asked to complete a survey by self-administration or interview. The survey covered demographics, usual working arrangements, details of the last trip and safety outcomes including fatigue experiences. On average drivers' last trip was over 2000 km and took 21.5 h to complete with an additional 6h of non-driving work. Incentive payments were associated with longer working hours, greater distances driven and higher fatigue for more drivers. Drivers required to wait in queues did significantly more non-driving work and experienced fatigue more often than those who did not. Drivers who were not paid to wait did the longest trips with average weekly hours above the legal working hours limits, had the highest levels of fatigue and the highest levels of interference by work with family life. In contrast, drivers who were paid to wait did significantly less work with shorter usual hours and shorter last trips. Multivariate analysis showed that incentive-based payment and unpaid waiting in queues were significant predictors of driver fatigue. The findings suggest that mandating payment of drivers for non-driving work including waiting would reduce the amount of non-driving work required for drivers and reduce weekly hours of work. In turn this would reduce driver fatigue and safety risk as well as enhancing the efficiency of the long distance road transport industry. Copyright © 2013 Elsevier Ltd. All rights reserved.
Optimizing a Drone Network to Deliver Automated External Defibrillators.
Boutilier, Justin J; Brooks, Steven C; Janmohamed, Alyf; Byers, Adam; Buick, Jason E; Zhan, Cathy; Schoellig, Angela P; Cheskes, Sheldon; Morrison, Laurie J; Chan, Timothy C Y
2017-06-20
Public access defibrillation programs can improve survival after out-of-hospital cardiac arrest, but automated external defibrillators (AEDs) are rarely available for bystander use at the scene. Drones are an emerging technology that can deliver an AED to the scene of an out-of-hospital cardiac arrest for bystander use. We hypothesize that a drone network designed with the aid of a mathematical model combining both optimization and queuing can reduce the time to AED arrival. We applied our model to 53 702 out-of-hospital cardiac arrests that occurred in the 8 regions of the Toronto Regional RescuNET between January 1, 2006, and December 31, 2014. Our primary analysis quantified the drone network size required to deliver an AED 1, 2, or 3 minutes faster than historical median 911 response times for each region independently. A secondary analysis quantified the reduction in drone resources required if RescuNET was treated as a large coordinated region. The region-specific analysis determined that 81 bases and 100 drones would be required to deliver an AED ahead of median 911 response times by 3 minutes. In the most urban region, the 90th percentile of the AED arrival time was reduced by 6 minutes and 43 seconds relative to historical 911 response times in the region. In the most rural region, the 90th percentile was reduced by 10 minutes and 34 seconds. A single coordinated drone network across all regions required 39.5% fewer bases and 30.0% fewer drones to achieve similar AED delivery times. An optimized drone network designed with the aid of a novel mathematical model can substantially reduce the AED delivery time to an out-of-hospital cardiac arrest event. © 2017 American Heart Association, Inc.
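The queuing ingredient of a fleet-sizing model like the one above can be illustrated with the classical Erlang-C formula for an M/M/c system, a standard textbook result rather than the authors' model; the parameter values below are arbitrary:

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability that an arrival must wait in an M/M/c queue
    (Erlang-C), for c servers, arrival rate lam, service rate mu.
    Requires offered load a = lam/mu < c for a stable queue."""
    a = lam / mu                  # offered load in Erlangs
    rho = a / c                   # per-server utilization (< 1)
    num = a**c / (factorial(c) * (1 - rho))
    den = sum(a**k / factorial(k) for k in range(c)) + num
    return num / den

def mean_wait(c, lam, mu):
    """Mean queueing delay: W_q = C(c, lam/mu) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)
```

Sizing a network of c drone bases against an arrival rate of cardiac-arrest calls is, in this simplified view, choosing c so that the waiting probability and mean delay fall below a response-time target.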
Implementation and use of a highly available and innovative IaaS solution: the Cloud Area Padovana
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Biasotto, M.; Dal Pra, S.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Frizziero, E.; Gulmini, M.; Michelotto, M.; Sgaravatto, M.; Traldi, S.; Venaruzzo, M.; Verlato, M.; Zangrando, L.
2015-12-01
While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures which allow efficient usage of the hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of open source adoption has led to the choice of products like OpenStack by many organizations. We describe a use case of the Italian National Institute for Nuclear Physics (INFN) which resulted in the implementation of a unique cloud service, called 'Cloud Area Padovana', which encompasses resources spread over two different sites: the INFN Legnaro National Laboratories and the INFN Padova division. We describe how this IaaS has been implemented, which technologies have been adopted, and how services have been configured in high-availability (HA) mode. We also discuss how identity and authorization management were implemented, adopting a widely accepted standard architecture based on SAML2 and OpenID: by leveraging the versatility of those standards, the integration with authentication federations like IDEM was implemented. We also discuss some other innovative developments, such as a pluggable scheduler, implemented as an extension of the native OpenStack scheduler, which allows the allocation of resources according to a fair-share based model and which provides a persistent queuing mechanism for handling user requests that cannot be immediately served. Tools, technologies, and procedures used to install, configure, monitor, and operate this cloud service are also discussed. Finally, we present some examples that show how this IaaS infrastructure is being used.
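The fair-share queuing idea above, holding requests that cannot be served immediately and preferring users furthest below their share, can be sketched as below. This is an illustrative toy, not the OpenStack scheduler extension described in the paper; the class, the share weights, and the simple usage charge are all invented:

```python
import heapq

class FairShareQueue:
    """Hold requests in a priority queue keyed by how much of their
    target share each user has already consumed; lower consumption
    relative to share means earlier dispatch. FIFO breaks ties."""

    def __init__(self, shares):
        self.shares = shares                  # user -> share weight
        self.usage = {u: 0 for u in shares}   # slots already granted
        self.heap = []                        # (priority, seq, user, req)
        self.seq = 0

    def submit(self, user, request):
        prio = self.usage[user] / self.shares[user]
        heapq.heappush(self.heap, (prio, self.seq, user, request))
        self.seq += 1

    def dispatch(self):
        _, _, user, request = heapq.heappop(self.heap)
        self.usage[user] += 1                 # charge user for the slot
        return user, request
```

A user with a larger share keeps a lower usage-to-share ratio and is therefore served ahead of an equally active small-share user.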
NASA Astrophysics Data System (ADS)
Patti, Andrew; Tan, Wai-tian; Shen, Bo
2007-09-01
Streaming video in consumer homes over wireless IEEE 802.11 networks is becoming commonplace. Wireless 802.11 networks pose unique difficulties for streaming high definition (HD), low latency video due to their error-prone physical layer and media access procedures which were not designed for real-time traffic. HD video streaming, even with sophisticated H.264 encoding, is particularly challenging due to the large number of packet fragments per slice. Cross-layer design strategies have been proposed to address the issues of video streaming over 802.11. These designs increase streaming robustness by imposing some degree of monitoring and control over 802.11 parameters from application level, or by making the 802.11 layer media-aware. Important contributions are made, but none of the existing approaches directly take the 802.11 queuing into account. In this paper we take a different approach and propose a cross-layer design allowing direct, expedient control over the wireless packet queue, while obtaining timely feedback on transmission status for each packet in a media flow. This method can be fully implemented on a media sender with no explicit support or changes required to the media client. We assume that due to congestion or deteriorating signal-to-noise levels, the available throughput may drop substantially for extended periods of time, and thus propose video source adaptation methods that allow matching the bit-rate to available throughput. A particular H.264 slice encoding is presented to enable seamless stream switching between streams at multiple bit-rates, and we explore using new computationally efficient transcoding methods when only a high bit-rate stream is available.
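The source-adaptation idea above, matching the bit-rate to the available throughput and switching between pre-encoded streams, can be sketched as a rate-ladder selection. The ladder, headroom factor, and function name are illustrative assumptions, not values from the paper, and the actual switching happens at specially encoded H.264 slice boundaries:

```python
def pick_stream(available_kbps, ladder, headroom=0.8):
    """Pick the highest bit-rate stream that fits within a safety
    fraction of the measured throughput; if even the lowest rung
    does not fit, fall back to it rather than stall."""
    usable = available_kbps * headroom
    candidates = [rate for rate in ladder if rate <= usable]
    return max(candidates) if candidates else min(ladder)
```

When congestion or a falling signal-to-noise ratio cuts the measured throughput, the sender drops to a lower rung for the duration of the outage and climbs back afterwards.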
ROME (Request Object Management Environment)
NASA Astrophysics Data System (ADS)
Kong, M.; Good, J. C.; Berriman, G. B.
2005-12-01
Most current astronomical archive services are based on an HTML/CGI architecture where users submit HTML forms via a browser and CGI programs operating under a web server process the requests. Most services return an HTML result page with URL links to the result files or, for longer jobs, return a message indicating that email will be sent when the job is done. This paradigm has a few serious shortcomings. First, it is all too common for something to go wrong and for the user to never hear about the job again. Second, for long and complicated jobs there is often important intermediate information that would allow the user to adjust the processing. Finally, unless some sort of custom queueing mechanism is used, background jobs are started immediately upon receiving the CGI request. When there are many such requests the server machine can easily be overloaded and either slow to a crawl or crash. Request Object Management Environment (ROME) is a collection of middleware components being developed under the National Virtual Observatory Project to provide a mechanism for managing long jobs such as computationally intensive statistical analysis requests or the generation of large-scale mosaic images. Written as EJB objects within the open-source JBoss application server, ROME receives processing requests via a servlet interface, stores them in a DBMS using JDBC, distributes the processing (via queuing mechanisms) across multiple machines and environments (including Grid resources), manages real-time messages from the processing modules, and ensures proper user notification. The request processing modules are identical in structure to standard CGI programs -- though they can optionally implement status messaging -- and can be written in any language. ROME will persist these jobs across failures of processing modules, network outages, and even downtime of ROME and the DBMS, restarting them as necessary.
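The persistence idea above can be sketched minimally: jobs survive restarts because their state lives in a database rather than in process memory. The schema and names below are ours, not ROME's:

```python
import sqlite3

class PersistentJobQueue:
    """Minimal sketch of persistent request queuing in the ROME style
    (schema and names are ours): jobs are durable rows, dispatched FIFO,
    with status updates recorded as processing modules report in."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS jobs"
                        " (id INTEGER PRIMARY KEY, request TEXT,"
                        "  status TEXT DEFAULT 'queued')")

    def submit(self, request):
        cur = self.db.execute("INSERT INTO jobs (request) VALUES (?)",
                              (request,))
        self.db.commit()
        return cur.lastrowid

    def next_job(self):
        # FIFO dispatch: the oldest still-queued job is claimed.
        row = self.db.execute("SELECT id, request FROM jobs"
                              " WHERE status='queued'"
                              " ORDER BY id LIMIT 1").fetchone()
        if row:
            self.db.execute("UPDATE jobs SET status='running' WHERE id=?",
                            (row[0],))
            self.db.commit()
        return row

    def report(self, job_id, status):
        # Status messages from processing modules update the durable record.
        self.db.execute("UPDATE jobs SET status=? WHERE id=?",
                        (status, job_id))
        self.db.commit()
```

With a file-backed database instead of `:memory:`, a restart of the dispatcher picks up exactly where it left off, which is the property the abstract emphasizes.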
Halasa, Tariq; Boklund, Anette
2014-01-01
The objectives of this study were to assess whether current surveillance capacity is sufficient to fulfill EU and Danish regulations to control a hypothetical foot-and-mouth disease (FMD) epidemic in Denmark, and whether enlarging the protection and/or surveillance zones could minimize economic losses. The stochastic spatial simulation model DTU-DADS was further developed to simulate clinical surveillance of herds within the protection and surveillance zones and used to model spread of FMD between herds. A queuing system was included in the model and, based on the daily surveillance capacity of 450 herds per day, it was decided whether herds appointed for surveillance would be surveyed on the current day or added to the queue. The model was run with a basic scenario representing the EU and Danish regulations, which includes a 3 km protection and 10 km surveillance zone around detected herds. In alternative scenarios, the protection zone was enlarged to 5 km, the surveillance zone was enlarged to 15 or 20 km, or a combined enlargement of the protection and surveillance zones was modelled. Sensitivity analysis included changing surveillance capacity to 200, 350 or 600 herds per day, changing the frequency of repeated visits for herds in overlapping surveillance zones from every 14 days to every 7, 21 and 30 days, and changing the size of the zones combined with a surveillance capacity increased to 600 herds per day. The results showed that the default surveillance capacity is sufficient to survey herds on time. Extra resources for surveillance did not improve the situation, but fewer resources could result in larger epidemics and costs. Enlarging the protection zone was a better strategy than the basic scenario. Although enlarging the surveillance zone might result in a shorter epidemic duration and a lower number of affected herds, it frequently resulted in larger economic losses. PMID:25014351
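The capacity-limited queuing rule described above can be illustrated with a toy model (ours, not DTU-DADS): herds appointed each day are surveyed up to the daily capacity, and the remainder join a FIFO queue for later days.

```python
from collections import deque

def run_surveillance(daily_appointments, capacity=450):
    """Toy illustration of the queuing rule: each day, survey up to
    `capacity` herds from the queue of outstanding appointments."""
    queue = deque()
    surveyed_per_day = []
    for appointed in daily_appointments:
        queue.extend(range(appointed))        # herds appointed today
        todays = min(capacity, len(queue))
        for _ in range(todays):
            queue.popleft()                   # surveyed on the current day
        surveyed_per_day.append(todays)
    return surveyed_per_day, len(queue)       # backlog after the last day
```

Running it with, say, 600 appointments on day one shows 150 herds carried over to day two; a capacity well above the typical daily appointment load keeps the backlog at zero, which matches the study's finding that 450 herds/day was sufficient.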
Protogyny in a tropical damselfish: females queue for future benefit
McCormick, Mark I
2016-01-01
Membership of the group is a balance between the benefits associated with group living and the cost of socially constrained growth and breeding opportunities, but the costs and benefits are seldom examined. The goal of the present study was to explore the trade-offs associated with group living for a sex-changing, potentially protogynous coral reef fish, the Ambon damselfish, Pomacentrus amboinensis. Extensive sampling showed that the species exhibits resource defence polygyny, where dominant males guard a nest site that is visited by females. P. amboinensis has a longevity of about 6.5 years on the northern Great Barrier Reef. While the species can change sex, consistent with being a protogynous hermaphrodite, the extent to which it uses this capability is unclear. Social groups are comprised of one reproductive male, 1-7 females and a number of juveniles. Females live in a linear dominance hierarchy, with the male being more aggressive to the beta-female than the alpha-female, who exhibits lower levels of ovarian cortisol. Surveys and a tagging study indicated that groups were stable for at least three months. A passive integrated transponder tag study showed that males spawn with females from their own group, but also females from neighbouring groups. In situ behavioural observations found that alpha-females have priority of access to the nest site that the male guarded, and access to higher quality foraging areas. Male removal studies suggest that the alpha-females can change sex to take over from the male when the position becomes available. Examination of otolith microstructure showed that those individuals which change sex to males have different embryonic characteristics at hatching, suggesting that success may involve a component that is parentally endowed.
The relative importance of parental effects and social organisation in shaping female queuing is yet to be studied, but will likely depend on the strength of social control by the dominant members of the group. PMID:27413641
Xiong, Jie; Zhou, Tong
2012-01-01
An important problem in systems biology is to reconstruct gene regulatory networks (GRNs) from experimental data and other a priori information. The DREAM project offers some types of experimental data, such as knockout data, knockdown data, time series data, etc. Among them, multifactorial perturbation data are easier and less expensive to obtain than other types of experimental data and are thus more common in practice. In this article, a new algorithm is presented for the inference of GRNs using the DREAM4 multifactorial perturbation data. The GRN inference problem among [Formula: see text] genes is decomposed into [Formula: see text] different regression problems. In each of the regression problems, the expression level of a target gene is predicted solely from the expression level of a potential regulation gene. For different potential regulation genes, different weights for a specific target gene are constructed by using the sum of squared residuals and the Pearson correlation coefficient. Then these weights are normalized to reflect effort differences of regulating distinct genes. By appropriately choosing the parameters of the power law, we construct a 0-1 integer programming problem. By solving this problem, direct regulation genes for an arbitrary gene can be estimated. Then, the normalized weight of a gene is modified on the basis of the estimation results about the existence of direct regulations to it. These normalized and modified weights are used in ranking the possibility of the existence of a corresponding direct regulation. Computation results with the DREAM4 In Silico Size 100 Multifactorial subchallenge show that the estimation performance of the suggested algorithm can even outperform the best team. Using the real data provided by the DREAM5 Network Inference Challenge, estimation performances can be ranked third.
Furthermore, the high precision of the obtained most reliable predictions shows the suggested algorithm may be helpful in guiding biological experiment designs.
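The single-regulator weighting step can be sketched as follows. The authors' exact formula is not reproduced in the abstract, so the way residuals and correlation are combined below is only one plausible instance of the idea:

```python
import math

def regression_weight(target, regulator):
    """Hedged sketch of the weighting idea: predict the target gene's
    expression from a single potential regulator by least squares, then
    combine the sum of squared residuals (SSR) with the Pearson
    correlation so that better fits receive larger weights."""
    n = len(target)
    mx, my = sum(regulator) / n, sum(target) / n
    sxx = sum((x - mx) ** 2 for x in regulator)
    sxy = sum((x - mx) * (y - my) for x, y in zip(regulator, target))
    syy = sum((y - my) ** 2 for y in target)
    slope = sxy / sxx
    ssr = sum((y - my - slope * (x - mx)) ** 2
              for x, y in zip(regulator, target))   # sum of squared residuals
    r = sxy / math.sqrt(sxx * syy)                  # Pearson correlation
    return abs(r) / (1.0 + ssr)                     # one plausible combination

def normalize(weights):
    """Normalize candidate-regulator weights for one target gene."""
    total = sum(weights)
    return [w / total for w in weights]
```

A perfectly linear regulator gives SSR = 0 and |r| = 1, hence the maximum weight of 1 under this combination.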
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high throughput computing system to aid scientists and educators with the characterization of spatial random fields and enable understanding the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information with a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers for submitting serial or parallel jobs, using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a processing time of 60 non-continuous hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
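The reported scaling can be sanity-checked with back-of-the-envelope arithmetic, assuming (simplistically) that throughput scales linearly with core count:

```python
def projected_wall_hours(total_sims, sims_done, hours_spent,
                         cores_used, cores_target):
    """Project wall-clock hours for `total_sims` forward-model runs from a
    measured sample, under an idealized linear-scaling assumption."""
    sims_per_core_hour = sims_done / (hours_spent * cores_used)
    return total_sims / (sims_per_core_hour * cores_target)

# Single machine: 100,000 simulations in 12 h on 8 cores
# => ~1200 h for 10 million on the same machine, and roughly
#    1200 * 8 / 132 ≈ 73 h on 132 HTCondor cores, broadly in line
#    with the ~60 non-continuous hours observed.
```

The observed 60 hours beats the naive linear projection slightly, which is unsurprising given scheduling effects and heterogeneous desktop hardware.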
Samadian, Soroush; Bruce, Jeff P; Pugh, Trevor J
2018-03-01
Somatic copy number variations (CNVs) play a crucial role in development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systematic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users working with locally generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer.
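The queue-and-fan-out pattern described above (non-overlapping regions processed by parallel workers) can be sketched in a few lines. The `introduce_cnv` function here is a stand-in; the real tool rewrites BAM reads:

```python
from concurrent.futures import ThreadPoolExecutor

def introduce_cnv(task):
    """Stand-in for the per-region work (Bamgineer edits BAM reads);
    here we just tag the region to show the fan-out/fan-in pattern."""
    chrom, start, end, kind = task
    return f"{kind}:{chrom}:{start}-{end}"

def run_parallel(tasks, workers=4):
    # Queue the non-overlapping regions and process them concurrently,
    # mirroring the tool's use of queuing plus parallel workers.
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(introduce_cnv, tasks))
```

Because the regions are non-overlapping, the per-region edits are independent and results can be merged in input order, which is what `pool.map` guarantees.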
Calculation of odour emissions from aircraft engines at Copenhagen Airport.
Winther, Morten; Kousgaard, Uffe; Oxbøl, Arne
2006-07-31
In a new approach the odour emissions from aircraft engines at Copenhagen Airport are calculated using actual fuel flow and emission measurements (one main engine and one APU: Auxiliary Power Unit), odour panel results, engine specific data and aircraft operational data for seven busy days. The calculation principle assumes a linear relation between odour and HC emissions. Using a digitalisation of the aircraft movements in the airport area, the results are depicted on grid maps, clearly reflecting aircraft operational statistics as single flights or total activity during a whole day. The results clearly reflect the short-term temporal fluctuations of the emissions of odour (and exhaust gases). Aircraft operating at low engine thrust (taxiing, queuing and landing) have a total odour emission share of almost 98%, whereas the shares for the take off/climb out phases (2%) and APU usage (0.5%) are only marginal. In most hours of the day, the largest odour emissions occur when the total amount of fuel burned during idle is high. However, significantly higher HC emissions for one specific engine cause considerable amounts of odour emissions during limited time periods. The experimentally derived odour emission factor of 57 OU/mg HC is within the range of 23 and 110 OU/mg HC used in other airport odour studies. The distribution of odour emission results between aircraft operational phases also corresponds very well with the results for these other studies. The present study uses measurement data for a representative engine. However, the uncertainties become large when the experimental data is used to estimate the odour emissions for all aircraft engines. More experimental data is needed to increase inventory accuracy, and in terms of completeness it is recommended to make odour emission estimates also for engine start and the fuelling of aircraft at Copenhagen Airport in the future.
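Under the stated linear assumption and the experimentally derived factor of 57 OU/mg HC, the conversion is a single multiplication:

```python
def odour_emission(hc_emission_mg, factor_ou_per_mg=57):
    """Linear odour model from the study: odour (odour units, OU) is
    proportional to HC mass emissions, at 57 OU per mg HC."""
    return hc_emission_mg * factor_ou_per_mg

# e.g. 10 g of HC emitted while taxiing -> 10,000 mg * 57 = 570,000 OU
```

The 23-110 OU/mg HC range quoted from other airport studies would simply shift this factor, leaving the phase-by-phase distribution unchanged.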
Data Quality Verification at STScI - Automated Assessment and Your Data
NASA Astrophysics Data System (ADS)
Dempsey, R.; Swade, D.; Scott, J.; Hamilton, F.; Holm, A.
1996-12-01
As satellite based observatories improve their ability to deliver wider varieties and more complex types of scientific data, so too does the process of analyzing and reducing these data. It becomes correspondingly imperative that Guest Observers or Archival Researchers have access to an accurate, consistent, and easily understandable summary of the quality of their data. Previously, at the STScI, an astronomer would display and examine the quality and scientific usefulness of every single observation obtained with HST. Recently, this process has undergone a major reorganization at the Institute. A major part of the new process is that the majority of data are assessed automatically with little or no human intervention. As part of routine processing in the OSS--PODPS Unified System (OPUS), the Observatory Monitoring System (OMS) observation logs, the science processing trailer file (also known as the TRL file), and the science data headers are inspected by an automated tool, AUTO_DQ. AUTO_DQ then determines if any anomalous events occurred during the observation or through processing and calibration of the data that affect the procedural quality of the data. The results are placed directly into the Procedural Data Quality (PDQ) file as a string of predefined data quality keywords and comments. These in turn are used by the Contact Scientist (CS) to check the scientific usefulness of the observations. In this manner, the telemetry stream is checked for known problems such as losses of lock, re-centerings, or degraded guiding, for example, while missing data or calibration errors are also easily flagged. If the problem is serious, the data are then queued for manual inspection by an astronomer. The success of every target acquisition is verified manually. If serious failures are confirmed, the PI and the scheduling staff are notified so that options concerning rescheduling the observations can be explored.
Haghighinejad, Hourvash Akbari; Kharazmi, Erfan; Hatam, Nahid; Yousefi, Sedigheh; Hesami, Seyed Ali; Danaei, Mina; Askarian, Mehrdad
2016-01-01
Background: Hospital emergencies have an essential role in health care systems. In the last decade, developed countries have paid great attention to the overcrowding crisis in emergency departments. Simulation analysis of complex models whose conditions change over time is much more effective than analytical solutions, and the emergency department (ED) is one of the most complex models for analysis. This study aimed to determine the number of patients waiting and the waiting time in emergency department services in an Iranian hospital ED and to propose scenarios to reduce its queue and waiting time. Methods: This is a cross-sectional study in which simulation software (Arena, version 14) was used. The input information was extracted from the hospital database as well as through sampling. The objective was to evaluate the response variables of waiting time, number waiting and utilization of each server and to test three scenarios to improve them. Results: Running the models for 30 days revealed that a total of 4088 patients left the ED after being served and 1238 patients waited in the queue for admission in the ED bed area at the end of the run (these patients actually received services beyond the defined capacity). In the first scenario, the number of beds had to be increased from 81 to 179 for the number waiting at the “bed area” server to become almost zero. The second scenario, which attempted to limit hospitalization time in the ED bed area to the third quartile of the serving time distribution, could decrease the number waiting to 586 patients. Conclusion: Doubling the bed capacity in the emergency department, and consequently other resources and capacity appropriately, can solve the problem. This includes bed capacity requirements for both critically ill and less critically ill patients. Classification of ED internal sections based on severity of illness instead of medical specialty is another solution. PMID:26793727
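Alongside simulation models like the Arena ED model above, queuing theory offers quick closed-form checks. As an illustration (not part of the study), the Erlang C formula gives the probability that an arrival must wait in an M/M/c system, showing directly why adding servers (beds) shrinks the waiting line:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Erlang C: probability an arriving customer must wait in an M/M/c
    queue with Poisson arrivals and exponential service times."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # per-server utilization
    if rho >= 1:
        return 1.0                           # unstable: everyone waits
    top = a ** servers / (factorial(servers) * (1 - rho))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom
```

For an offered load of 1 Erlang and 2 servers the waiting probability is 1/3; doubling the server count at a fixed load drives it sharply toward zero, the analytical analogue of the study's first scenario.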
NASA Astrophysics Data System (ADS)
Wei, Pei; Gu, Rentao; Ji, Yuefeng
2014-06-01
As an innovative and promising technology, network coding has been introduced to passive optical networks (PON) in recent years to support inter optical network unit (ONU) communication, yet the signaling process and dynamic bandwidth allocation (DBA) in PON with network coding (NC-PON) still need further study. Thus, we propose a joint signaling and DBA scheme for efficiently supporting differentiated services of inter ONU communication in NC-PON. In the proposed joint scheme, the signaling process lays the foundation to fulfill network coding in PON, and it can not only avoid the potential threat to downstream security in previous schemes but also be suitable for the proposed hybrid dynamic bandwidth allocation (HDBA) scheme. In HDBA, a DBA cycle is divided into two sub-cycles for applying different coding, scheduling and bandwidth allocation strategies to differentiated classes of services. Besides, as network traffic load varies, the entire upstream transmission window for all REPORT messages slides accordingly, leaving the transmission time of one or two sub-cycles to overlap with the bandwidth allocation calculation time at the optical line terminal (OLT), so that the upstream idle time can be efficiently eliminated. Performance evaluation results validate that, compared with the existing two DBA algorithms deployed in NC-PON, HDBA demonstrates the best quality of service (QoS) support in terms of delay for all classes of services, and especially guarantees the end-to-end delay bound of high class services. Specifically, HDBA can eliminate the queuing delay and scheduling delay of high class services, reduce those of lower class services by at least 20%, and reduce the average end-to-end delay of all services by over 50%. Moreover, HDBA also achieves the maximum delay fairness between coded and uncoded lower class services, and medium delay fairness for high class services.
Automation of the Lowell Observatory 0.8-m Telescope
NASA Astrophysics Data System (ADS)
Buie, M. W.
2001-11-01
In the past year I have converted the Lowell Observatory 0.8-m telescope from a classically scheduled and operated telescope to an automated facility. The new setup uses an existing CCD camera and the existing telescope control system. The key steps in the conversion were writing a new CCD control and data acquisition module plus writing communication and queue control software. The previous CCD control program was written for DOS and much of the code was reused for this project. The entire control system runs under Linux and consists of four daemons: MOVE, PCCD, CMDR, and PCTL. The MOVE daemon is a process that communicates with the telescope control system via an RS232 port, keeping track of its state and forwarding commands from other processes to the telescope. The PCCD daemon controls the CCD camera and collects data. The CMDR daemon maintains a FIFO queue of commands to be executed during the night. The PCTL daemon receives notification from any other daemon of execution failures and sends an error code to the on-duty observer via a numeric pager. This system runs through the night much as you would traditionally operate a telescope. However, this system permits queuing up all the commands for a night, and they execute one after another in sequence. Additional commands are needed to replace the normal human interaction during observing (i.e., target acquisition, field registration, focusing). Also, numerous temporal synchronization commands are required so that observations happen at the right time. The system was used for this year's photometric monitoring of Pluto and Triton and is in general use for 2/3 of time on the telescope. Pluto observations were collected on 30 nights out of a potential pool of 90 nights. Detailed system design and capabilities plus sample observations will be presented. Also, a live demonstration will be provided if the weather is good. This work was supported by NASA Grant NAG5-4210 and the NSF REU Program grant to NAU.
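The CMDR daemon's FIFO behaviour can be sketched as follows. The structure is ours for illustration; the real system is split across four daemons and an RS232 link:

```python
from collections import deque
import time

class CommandQueue:
    """Sketch of a CMDR-style FIFO: commands queued for the night execute
    one after another, and a 'waituntil' command stands in for the
    temporal synchronization commands mentioned above."""

    def __init__(self):
        self.fifo = deque()
        self.log = []

    def queue(self, cmd, *args):
        self.fifo.append((cmd, args))

    def run(self, now=time.time):
        while self.fifo:
            cmd, args = self.fifo.popleft()
            if cmd == "waituntil":
                while now() < args[0]:     # block until the scheduled epoch
                    time.sleep(0.01)
            else:
                self.log.append((cmd, args))   # e.g. slew, expose, focus
```

A night's plan then becomes a sequence of `queue(...)` calls loaded before dusk, with `waituntil` entries pinning time-critical exposures.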
Pennay, Amy; Miller, Peter; Busija, Lucy; Jenkinson, Rebecca; Droste, Nicolas; Quinn, Brendan; Jones, Sandra C; Lubman, Dan I
2015-02-01
We tested whether patrons of the night-time economy who had co-consumed energy drinks or illicit stimulants with alcohol had higher blood alcohol concentration (BAC) levels than patrons who had consumed only alcohol. Street intercept surveys (n = 4227) were undertaken between 9 p.m. and 5 a.m. over a period of 7 months. Interviews were undertaken with patrons walking through entertainment precincts, queuing to enter venues or exiting venues in five Australian cities. The response rate was 92.1%; more than half the study sample was male (60.2%) and the median age was 23 years (range 18-72). Data were collected on demographics, length of drinking session, venue types visited, types and quantity of alcohol consumed and other substance use. A BAC reading was recorded and a subsample of participants was tested for other drug use. Compared with the total sample (0.068%), illicit stimulant consumers (0.080%; P = 0.004) and energy drink consumers (0.074%; P < 0.001) had a significantly higher median BAC reading, and were more likely to engage in pre-drinking (65.6, 82.1 and 77.6%, respectively, P < 0.001) and longer drinking sessions (4, 5 and 4.5 hours, respectively, P < 0.001). However, stimulant use was not associated independently with higher BAC in the final multivariable model (illicit stimulants P = 0.198; energy drinks P = 0.112). Interaction analyses showed that stimulant users had a higher BAC in the initial stages of the drinking session, but not after 4-6 hours. While stimulant use does not predict BAC in and of itself, stimulant users are more likely to engage in prolonged sessions of heavy alcohol consumption and a range of risk-taking behaviours on a night out, which may explain higher levels of BAC among stimulant users, at least in the initial stages of the drinking session. © 2014 Society for the Study of Addiction.
Storey, John Morse; Curran, Scott; Dempsey, Adam B.; ...
2014-12-25
Reactivity controlled compression ignition (RCCI) has been shown in single- and multi-cylinder engine research to achieve high thermal efficiencies with ultra-low NOx and soot emissions. The nature of the particulate matter (PM) produced by RCCI operation has been shown in recent research to be different than that of conventional diesel combustion and even diesel low-temperature combustion. Previous research has shown that the PM from RCCI operation contains a large amount of organic material that is volatile and semi-volatile. However, it is unclear if the organic compounds stem from fuel or lubricant oil. The PM emissions from dual-fuel RCCI were investigated in this study using two engine platforms, with an emphasis on the potential contribution of lubricant. Both engine platforms used the same base General Motors (GM) 1.9-L diesel engine geometry. The first study was conducted on a single-cylinder research engine with primary reference fuels (PRFs), n-heptane, and iso-octane. The second study was conducted on a four-cylinder GM 1.9-L ZDTH engine which was modified with a port fuel injection (PFI) system while maintaining the stock direct injection fuel system. Multi-cylinder RCCI experiments were run with PFI gasoline and direct injection of 2-ethylhexyl nitrate (EHN) mixed with gasoline at 5% EHN by volume. In addition, comparison cases of conventional diesel combustion (CDC) were performed. Particulate size distributions were measured, and PM filter samples were collected for analysis of lube oil components. Triplicate PM filter samples (i.e., three individual filter samples) for both gas chromatography-mass spectrometry (GC-MS; organic) analysis and X-ray fluorescence (XRF; metals) were obtained at each operating point and queued for analysis of both organic species and lubricant metals. The results give a clear indication that lubricants do not contribute significantly to the formation of RCCI PM.
NASA Astrophysics Data System (ADS)
Neakrase, Lynn; Hornung, Danae; Sweebe, Kathrine; Huber, Lyle; Chanover, Nancy J.; Stevenson, Zena; Berdis, Jodi; Johnson, Joni J.; Beebe, Reta F.
2017-10-01
The Research and Analysis programs within NASA’s Planetary Science Division now require archiving of resultant data with the Planetary Data System (PDS) or an equivalent archive. The PDS Atmospheres Node is developing an online environment for assisting data providers with this task. The Educational Labeling System for Atmospheres (ELSA) is being designed with Django/Python coding to provide an easier environment for facilitating not only communication with the PDS node, but also streamlining the process of learning, developing, submitting, and reviewing archive bundles under the new PDS4 archiving standard. Under the PDS4 standard, data are archived in bundles, collections, and basic products that form an organizational hierarchy of interconnected labels that describe the data and relationships between the data and its documentation. PDS4 labels are implemented using Extensible Markup Language (XML), which is an international standard for managing metadata. Potential data providers entering the ELSA environment can learn more about PDS4, plan and develop label templates, and build their archive bundles. ELSA provides an interface to tailor label templates, aiding in the creation of required internal Logical Identifiers (URN - Uniform Resource Names) and Context References (missions, instruments, targets, facilities, etc.). The underlying structure of ELSA uses Django/Python code that makes maintaining and updating the interface easy for our undergraduate/graduate students. The ELSA environment will soon provide an interface for using the tailored templates in a pipeline to produce entire collections of labeled products, essentially building the user's archive bundle. Once the pieces of the archive bundle are assembled, ELSA provides options for queuing the completed bundle for peer review. The peer review process has also been streamlined for online access and tracking to help make the archiving process with PDS as transparent as possible.
We discuss the current status of ELSA and provide examples of its implementation.
Towards a unified theory of neocortex: laminar cortical circuits for vision and cognition.
Grossberg, Stephen
2007-01-01
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of preattentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
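The Competitive Queuing readout that LIST PARSE embodies can be illustrated with a toy sketch: list items are held in working memory in parallel with a primacy gradient of activation, the most active item wins the competition and is performed, and performed items are suppressed. This is an illustrative reduction, not the LIST PARSE circuit itself.

```python
# Minimal Competitive Queuing sketch (illustrative, not the LIST PARSE model):
# a sequence is stored as a parallel activation gradient; at each step the
# most active item is selected for output and then self-inhibited.

def competitive_queuing(activations):
    """Read out items in order of descending activation."""
    pending = dict(activations)                  # item -> activation level
    order = []
    while pending:
        winner = max(pending, key=pending.get)   # competitive selection
        order.append(winner)
        del pending[winner]                      # suppression after performance
    return order

# A primacy gradient over the planned list "A, B, C" yields ordered recall.
gradient = {"A": 0.9, "B": 0.6, "C": 0.3}
print(competitive_queuing(gradient))  # ['A', 'B', 'C']
```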
Gokhale, Sharad; Raokhande, Namita
2008-05-01
There are several models that can be used to evaluate roadside air quality. Comparing the operational performance of different models under local conditions is desirable so that the best-performing model can be identified. Three air quality models, namely the 'modified General Finite Line Source Model' (M-GFLSM) of particulates, the 'California Line Source' (CALINE3) model, and the 'California Line Source for Queuing & Hot Spot Calculations' (CAL3QHC) model, have been identified for evaluating the air quality at one of the busiest traffic intersections in the city of Guwahati. These models have been evaluated statistically against the vehicle-derived airborne particulate mass emissions in two sizes, i.e. PM10 and PM2.5, the prevailing meteorology, and the temporal distribution of the measured daily average PM10 and PM2.5 concentrations in wintertime. The study has shown that the CAL3QHC model makes better predictions than the other models across varied meteorology and traffic conditions. The agreement between the measured and the modeled PM10 and PM2.5 concentrations has been reasonably good for the CALINE3 and CAL3QHC models, with further analysis showing that CAL3QHC performed better than CALINE3. The monthly performance measures have led to similar results. These two models have also outperformed the M-GFLSM across the wind-speed classes, except for low winds (<1 m s(-1)), for which the M-GFLSM model has shown a tendency towards better performance for PM10. Nevertheless, the CAL3QHC model has performed best for both particulate sizes and all wind classes, and can therefore be a suitable option for air quality assessment at urban traffic intersections.
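Statistical model evaluation of this kind typically relies on standard dispersion-model performance measures; the sketch below computes two common ones, fractional bias (FB) and the fraction of predictions within a factor of two of observations (FAC2). The paper's exact statistics are not given here, so both the choice of metrics and the sample values are assumptions for illustration.

```python
# Two performance measures commonly used to compare dispersion models against
# observations: fractional bias (FB) and the fraction of predictions within a
# factor of two (FAC2). Sample values are illustrative daily PM10 in ug/m3.

def fractional_bias(obs, pred):
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)   # 0 = unbiased; |FB| < 0.3 often deemed good

def fac2(obs, pred):
    within = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return within / len(obs)

obs  = [80.0, 120.0, 60.0, 100.0]
pred = [70.0, 110.0, 90.0, 95.0]
print(round(fractional_bias(obs, pred), 3), fac2(obs, pred))
```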
NASA Astrophysics Data System (ADS)
Magee, Jeff; Moffett, Jonathan
1996-06-01
Special Issue on Management This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, SysMan and IDSM, and more generally to bring together work in the area of distributed systems management. The workshop focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions, and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed by the workshop were of considerable interest both to industry and academia. The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop. A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems by the provision of automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue. To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper `Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. 
Krause and Zimmermann in their paper `Implementing configuration management policies for distributed applications' present a system in which the configuration of the system in terms of its constituent components and their interconnections can be controlled by reconfiguration rules. Neumair and Wies in the paper `Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in `Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule based implementation of management policy. The paper by Jardin, `Supporting scalability and flexibility in a distributed management platform' reports on the experience of using a policy directed approach in the industrial strength TeMIP management platform. Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must deal with condensing and summarizing the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper `Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed. Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture. 
The collection of papers gives a good overview of the current state of the art in distributed system management. The field has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, is coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume? We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.
Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.
2014-12-01
We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purpose of these services is to give hydrologic researchers, modelers, water managers, and users access to HPC resources without requiring them to become HPC experts or to understand the intrinsic complexities of the underlying data services, thereby reducing the time and effort spent finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC. Needed data are often not in the form required by such models, forcing researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems of HPC facilities. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, and then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation and generation of hydrology-based terrain information such as wetness index and stream networks. These services also support the derivation of inputs for the Utah Energy Balance snowmelt model used to address questions such as how climate, land cover and land use change may affect snowmelt inputs to runoff generation. 
To enhance access to the time varying climate data used to drive hydrologic models, we have developed services to downscale and re-grid nationally available climate analysis data from systems such as NLDAS and MERRA. These cases serve as examples for how this approach can be extended to other models to enhance the use of HPC for hydrologic modeling.
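A client of such a REST gateway might assemble a job request like the following. The endpoint URL, tool name, and field names are hypothetical placeholders for illustration, not the actual gateway API.

```python
# Sketch of how a script might use a REST gateway API to submit a watershed
# delineation job without touching the HPC queuing system directly. The
# endpoint path and field names below are hypothetical, not the actual API.

import json

GATEWAY = "https://hpc-gateway.example.org/api/v1"   # placeholder URL

def build_job_request(outlet_lat, outlet_lon, dem_resolution_m=30):
    """Assemble the JSON body for a TauDEM-style delineation job."""
    return {
        "tool": "taudem.watershed_delineation",
        "inputs": {
            "outlet": {"lat": outlet_lat, "lon": outlet_lon},
            "dem_resolution_m": dem_resolution_m,
        },
    }

body = build_job_request(41.74, -111.83)   # illustrative outlet coordinates
print(json.dumps(body, sort_keys=True))
# A real client would POST this to f"{GATEWAY}/jobs" with an auth token and
# poll the returned job id -- the gateway hides the batch scheduler entirely.
```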
Lessons from life: Learning from exhibits, animals and interaction in a museum
NASA Astrophysics Data System (ADS)
Goldowsky, Alexander Noah
This study examines the effect of interaction on visitor behavior at a public aquarium, experimentally comparing one exhibit under interactive and noninteractive conditions. A quantitative analysis showed that the time visitor groups spent in the study area increased significantly in the interactive condition (median 73 vs. 32 seconds). Further, this effect extended only to those groups within the interactive condition in which at least one member operated the exhibit (median 102 vs. 36 seconds). Both median times and survival curves are compared, and the analysis controlled for group size, age and sex ratios, visitor density, queuing time, and animal activity. Qualitative analyses focused on visitors' spontaneous conversation at the exhibit. Interactive visitors were found to engage in more in-depth exploration, including conducting informal experiments. The amount of discussion was found to correlate with stay time (r = 0.47). Visitor discussion centered on the exhibit, with frequent observations of penguin behavior. Greater enthusiasm was observed among interactive visitors: coding showed that they laughed more frequently and were significantly more likely to speculate on the penguins' reactions and motivations for behaviors. The experimental setup included a control condition consisting of a typical aquarium exhibit, including live penguins, naturalistic habitat, and graphics. The interactive condition added a device designed to mediate a two-way interaction between the visitors and penguins: visitors moved a light beam across the bottom of the pool, and the penguins, in turn, chased the light. This exhibit was designed both to benefit visitors and to serve as behavioral enrichment for the penguins. A third condition employed an automatically moving light, which elicited similar penguin behaviors but without allowing visitor interaction. Videotaped data were analyzed for 301 visitor groups (756 individuals). 
A supplemental study employed video recall interviews. The study concludes that interaction is fundamental to the way in which humans investigate their world, and should play a major role in shaping the educational design of zoo and aquarium exhibits. Interactivity can encourage investigation and experimentation with phenomena, increase exhibit feedback, enhance the psychological dimensions of choice and control, and support visitors' desire for relationships with animals.
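The stay-time comparison can be pictured with empirical survival curves, S(t) = fraction of visitor groups still at the exhibit after t seconds, alongside medians. The sample times below are invented for illustration; only the method mirrors the study's analysis.

```python
# Empirical survival curves for stay times: S(t) = fraction of visitor
# groups still at the exhibit after t seconds. Times are made-up samples,
# not the study's raw data.

def survival(times, t):
    return sum(1 for x in times if x > t) / len(times)

def median(times):
    s = sorted(times)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

interactive    = [40, 65, 73, 90, 102, 130]   # seconds (illustrative)
noninteractive = [15, 25, 32, 36, 50, 70]

print(median(interactive), median(noninteractive))
print(survival(interactive, 60), survival(noninteractive, 60))
```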
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, J; Coss, D; McMurry, J
Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the required time for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculation started to decrease with 150 threads. The memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time in proton dose distribution calculations by a factor of 64 and 54 at a near-optimal 100 threads for the water phantom and patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event versus the waiting time to access the shared event queue, a performance evaluation as described is recommended.
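The reported behavior (a non-parallelizable initialization plus an event loop scaling as 1/N) is the classic Amdahl's-law pattern, sketched below with illustrative timings rather than measured Geant4-MT values.

```python
# Amdahl-style model of the reported scaling: a fixed serial initialization
# time plus an event loop that scales as 1/N with thread count N. Timings
# are illustrative, not measured Geant4-MT values.

def run_time(t_serial, t_parallel, n_threads):
    return t_serial + t_parallel / n_threads

def speedup(t_serial, t_parallel, n_threads):
    t1 = t_serial + t_parallel          # single-thread wall time
    return t1 / run_time(t_serial, t_parallel, n_threads)

t_init, t_loop = 60.0, 6000.0           # seconds: fixed setup vs. event loop
for n in (1, 100, 150, 200):
    # diminishing returns appear once t_loop/n shrinks toward t_init
    print(n, round(speedup(t_init, t_loop, n), 1))
```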
Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study
NASA Astrophysics Data System (ADS)
Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar
2017-06-01
The purpose of this paper is to develop a mathematical model for minimizing total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled, in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering these cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost across both the manufacturer and the retailer. In the process two quantities must be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size, q) should be produced: a large value of q favors the manufacturer, while a small value suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a product be taken up for production by the manufacturer. A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must therefore be considered when choosing an optimal value of S. It may be noted that, due to multiple products and a single processor, a product taken up for production may not get the processor immediately and may have to wait in a queue. 
The choice of S should therefore factor in the expected waiting time in the queue.
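The holding-versus-backorder tradeoff in choosing S can be sketched numerically: with Poisson demand D over the relevant wait-plus-production interval, the expected cost is h*E[max(S-D, 0)] + b*E[max(D-S, 0)], minimized over S. The parameters below are illustrative, not from the case study.

```python
# Choosing the production queuing point S under Poisson demand: expected
# holding-plus-backorder cost, minimized by enumeration. Parameters are
# illustrative stand-ins, not the case-study values.

from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def expected_cost(S, lam, h, b, k_max=60):
    # E[h * (S - D)^+ + b * (D - S)^+] with demand D ~ Poisson(lam),
    # truncating the negligible tail beyond k_max
    cost = 0.0
    for k in range(k_max):
        p = poisson_pmf(k, lam)
        cost += p * (h * max(S - k, 0) + b * max(k - S, 0))
    return cost

lam, h, b = 10.0, 1.0, 5.0        # mean demand, holding cost, backorder cost
best_S = min(range(40), key=lambda S: expected_cost(S, lam, h, b))
print(best_S)
```

Because backorders are costlier than holding here (b > h), the optimal S sits above the mean demand, at the newsvendor quantile b/(b+h) of the demand distribution.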
SkyProbeBV: dual-color absolute sky transparency monitor to optimize science operations
NASA Astrophysics Data System (ADS)
Cuillandre, Jean-Charles; Magnier, Eugene; Sabin, Dan; Mahoney, Billy
2008-07-01
Mauna Kea (4200 m elevation, Hawaii) is known for its pristine seeing conditions, but sky transparency can be an issue for science operations: 25% of the nights are not photometric, with cloud coverage mostly due to high-altitude thin cirrus. The Canada-France-Hawaii Telescope (CFHT) is upgrading its real-time sky transparency monitor in the optical domain (V-band) into a dual-color system by adding a B-band channel and redesigning the entire optical and mechanical assembly. Since 2000, the original single-channel SkyProbe has gathered one exposure every minute during each observing night using a small CCD camera with a very wide field of view (35 sq. deg.) encompassing the region pointed at by the telescope for science operations, with exposures long enough (30 seconds) to capture at least 100 stars of the Hipparcos mission's Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). A key advantage of SkyProbe over direct thermal infrared imaging detection of clouds is that it allows an accurate absolute measurement, within 5%, of the true atmospheric absorption by clouds affecting the data being gathered by the telescope's main science instrument. This system has proven crucial for decision making in the CFHT queued service observing (QSO) mode, which today represents 95% of the telescope time: science exposures taken in non-photometric conditions are automatically registered to be re-observed later (at 1/10th of the original exposure time per pointing in the observed filters) to ensure a proper final absolute photometric calibration. If the absorption is too high, exposures can be repeated, or the observing can switch to a lower-ranked science program. The new dual-color system (simultaneous B & V bands) will allow a better characterization of the sky properties above Mauna Kea and should enable better detection of the thinner cirrus (absorption down to 0.02 mag., i.e. 2%). 
SkyProbe is operated within the Elixir pipeline, a collection of tools used for handling the CFHT CCD mosaics (CFH12K and MegaCam), from data pre-processing to astrometric and photometric calibration.
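The quoted absorption figures convert between magnitudes and fractional light loss via transmission = 10^(-0.4*dm); a quick check confirms that 0.02 mag corresponds to roughly a 2% absorption.

```python
# Converting an absorption quoted in magnitudes into a fractional light
# loss: transmission = 10^(-0.4 * dm), so loss = 1 - transmission.

def absorption_fraction(delta_mag):
    return 1.0 - 10.0 ** (-0.4 * delta_mag)

# The 0.02 mag detection floor quoted for thin cirrus is about a 2% loss;
# half a magnitude of cloud already absorbs over a third of the light.
for dm in (0.02, 0.05, 0.5):
    print(dm, round(100 * absorption_fraction(dm), 1), "%")
```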
Stochastic switching in biology: from genotype to phenotype
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2017-03-01
There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the numbers of molecules (genes, mRNAs, proteins) involved in gene expression are often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. 
The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.
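The simplest instance of the stochastic switching discussed in this review is the two-state "telegraph" gene toggling between OFF and ON, which can be simulated exactly with the Gillespie algorithm; the rate constants below are illustrative.

```python
# Gillespie simulation of a two-state "telegraph" switch: the system
# toggles OFF <-> ON with exponential waiting times. Rates are illustrative.

import random

def telegraph(k_on, k_off, t_end, seed=1):
    """Return the fraction of time spent ON up to t_end."""
    rng = random.Random(seed)
    t, state, t_on = 0.0, 0, 0.0            # state: 0 = OFF, 1 = ON
    while t < t_end:
        rate = k_on if state == 0 else k_off
        dt = rng.expovariate(rate)           # exponential waiting time
        dt = min(dt, t_end - t)              # clip the final interval
        if state == 1:
            t_on += dt
        t += dt
        state = 1 - state                    # switch
    return t_on / t_end

# The stationary ON occupancy should approach k_on / (k_on + k_off) = 2/3.
print(round(telegraph(2.0, 1.0, 10000.0), 2))
```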
Optimal Operation of Data Centers in Future Smart Grid
NASA Astrophysics Data System (ADS)
Ghamkhari, Seyed Mahdi
The emergence of cloud computing has established a growing trend towards building massive, energy-hungry, and geographically distributed data centers. Due to their enormous energy consumption, data centers are expected to have a major impact on the electric grid by significantly increasing the load at locations where they are built. However, data centers also provide opportunities to help the grid with respect to robustness and load balancing. For instance, as data centers are major and yet flexible electric loads, they can be proper candidates to offer ancillary services, such as voluntary load reduction, to the smart grid. Also, data centers may better stabilize the price of energy in the electricity markets, and at the same time reduce their electricity cost by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. In this thesis, such potentials are investigated within an analytical profit maximization framework by developing new mathematical models based on queuing theory. The proposed models capture the trade-off between quality-of-service and power consumption in data centers. They are not only accurate, but they also possess convexity characteristics that facilitate joint optimization of data centers' service rates, demand levels and demand bids to different electricity markets. The analysis is further expanded to also develop a unified comprehensive energy portfolio optimization for data centers in the future smart grid. Specifically, it is shown how utilizing one energy option may affect selecting other energy options that are available to a data center. For example, we will show that the use of on-site storage and the deployment of geographical workload distribution can particularly help data centers in utilizing high-risk energy options such as renewable generation. 
The analytical approach in this thesis takes into account service-level-agreements, risk management constraints, and also the statistical characteristics of the Internet workload and the electricity prices. Using empirical data, the performance of our proposed profit maximization models for data centers are evaluated, and the capability of data centers to benefit from participation in a variety of Demand Response programs is assessed.
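A stripped-down version of the trade-off the thesis formalizes: in an M/M/1 model of a data center, raising the service rate mu cuts the mean delay 1/(mu - lambda) but raises energy cost, so profit peaks at an interior service rate. All parameters below are illustrative, not from the thesis.

```python
# M/M/1 sketch of the quality-of-service vs. power trade-off: delay is
# 1/(mu - lam), energy cost grows with mu, and profit is maximized at an
# interior service rate. Parameter values are illustrative.

def profit(mu, lam, revenue_per_job, energy_price, delay_penalty):
    if mu <= lam:
        return float("-inf")                 # unstable queue: no profit
    delay = 1.0 / (mu - lam)                 # M/M/1 mean response time
    return lam * revenue_per_job - energy_price * mu - delay_penalty * delay

lam = 50.0                                    # job arrival rate (jobs/s)
candidates = [lam + 0.5 * k for k in range(1, 200)]
best_mu = max(candidates, key=lambda mu: profit(mu, lam, 0.10, 0.02, 1.0))
print(round(best_mu, 1))
```

The first-order condition balances the marginal energy cost against the marginal delay saving, giving mu* = lam + sqrt(delay_penalty / energy_price) for this toy objective.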
Air Pollution Exposure in Relation to the Commute to School: A Bradford UK Case Study
Dirks, Kim N.; Wang, Judith Y. T.; Khan, Amirul; Rushton, Christopher
2016-01-01
Walking School Buses (WSBs) provide a safe alternative to being driven to school. The walk contributes towards children's daily exercise target, gives them practical experience of road safety, and helps to relieve traffic congestion around the school entrance. Walking routes are designed largely based on road safety considerations, catchment need and the availability of parent support. However, little attention is given to the air pollution exposure experienced by children during their journey to school, despite the commuting microenvironment being an important contributor to a child's daily air pollution exposure. This study aims to quantify the air pollution exposure experienced by children walking to school and those being driven by car. A school was chosen in Bradford, UK. Three adult participants carried out the journey to and from school, each carrying a P-Trak ultrafine particle (UFP) count monitor. One participant travelled to school by car while the other two walked, each on opposite sides of the road for the majority of the journey. Data collection was carried out over a period of two weeks, for a total of five journeys to school in the morning and five on the way home at the end of the school day. Results of the study suggest that car commuters experience a lower air pollution dose due to lower exposure and reduced commute times. The largest reductions in exposure for pedestrians can be achieved by avoiding close proximity to traffic queuing up at intersections, and, where possible, walking on the side of the road opposite the traffic, especially during the morning commuting period. Major intersections should also be avoided as they were associated with peak exposures. Steps to ensure that the phasing of lights is optimised to minimise pedestrian waiting time would also help reduce exposure. If possible, busy roads should be avoided altogether. 
By the careful design of WSB routes, taking into account air pollution, children will be able to experience the benefits that walking to school brings while minimizing their air pollution exposure during their commute to and from school. PMID:27801878
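The dose comparison underlying these results reduces to dose = mean concentration x exposure time, which is why a shorter, lower-exposure car trip can beat a walk beside queuing traffic. The numbers below are invented for illustration, not the study's measurements.

```python
# Commuter dose as concentration x duration: a shorter trip at a lower
# mean concentration yields a smaller dose. Values are illustrative only.

def dose(mean_concentration, minutes):
    return mean_concentration * minutes      # e.g. particles/cm3 x min

walk = dose(30000, 20)    # UFP count concentration on a 20-minute walk
car  = dose(20000, 8)     # lower in-cabin level over a shorter trip
print(walk, car, walk / car)
```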
Use of CCSDS and OSI Protocols on the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Chirieleison, Don
1996-01-01
Although ACTS (Advanced Communications Technology Satellite) provides an almost error-free channel during much of the day and under most conditions, there are times when it is not suitable for reliably error-free data communications when operating in the uncoded mode. Because coded operation is not always available to every earth station, measures must be taken in the end system to maintain adequate throughput when transferring data under adverse conditions. The most effective approach that we tested to improve performance was the addition of an 'outer' Reed-Solomon code through use of CCSDS (Consultative Committee for Space Data Systems) GOS 2 (a forward error correcting code). This addition can benefit all users of an ACTS channel, including applications that do not require totally reliable transport, but it is somewhat expensive because additional hardware is needed. Although we could not characterize the link noise statistically (it appeared to resemble uncorrelated white noise, the type that block codes are least effective in correcting), we did find that CCSDS GOS 2 gave an essentially error-free link at BERs (bit error rates) as high as 6x10^-4. For users that demand reliable transport, an ARQ (Automatic Repeat reQuest) protocol such as TCP (Transmission Control Protocol) or TP4 (Transport Protocol, Class 4) will probably be used. In this category, it comes as no surprise that the best choice of the protocol suites tested over ACTS was TP4 using CCSDS GOS 2. TP4 behaves very well over an error-free link, which GOS 2 provides up to a point. Without forward error correction, however, TP4 service begins to degrade in the 10^-7 to 10^-6 range, and by 4x10^-6 it barely gives any throughput at all. If Congestion Avoidance is used in TP4, the degradation is even more pronounced. Fortunately, as demonstrated here, this effect can be more than compensated for by choosing the Selective Acknowledgment option. 
In fact, this option can enable TP4 to deliver some throughput at error rates as high as 10^-5.
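The throughput collapse at BERs near 4x10^-6 follows from how fast the probability of an intact segment, (1 - p)^n, decays with segment size; a quick calculation with an assumed 4 KB segment illustrates the effect.

```python
# Probability a segment arrives intact over a link with bit error rate p:
# (1 - p)^n for n bits. The 4 KB segment size is an assumption chosen to
# illustrate why throughput degrades sharply around BER 1e-6 to 1e-5.

def frame_success_prob(ber, bits):
    return (1.0 - ber) ** bits

for ber in (1e-7, 1e-6, 4e-6, 1e-5):
    p_ok = frame_success_prob(ber, 8 * 4096)    # 4 KB segment
    print(f"{ber:.0e} {p_ok:.3f}")
```

With go-back-N style recovery each lost segment forces large retransmissions, which is why a selective-acknowledgment scheme that resends only the damaged segments degrades far more gracefully.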
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Beekwilder, N.; Chan, S.; Cheah, Y. W.; Chu, H.; Dengel, S.; O'Brien, F.; Pastorello, G.; Sandesh, M.; Torn, M. S.; Agarwal, D.
2017-12-01
AmeriFlux is a network of scientists who independently collect eddy covariance and related environmental observations at over 250 locations across the Americas. As part of the AmeriFlux Management Project, the AmeriFlux Data Team manages standardization, collection, quality assurance / quality control (QA/QC), and distribution of data submitted by network members. To generate data products that are timely, QA/QC'd, and repeatable, and have traceable provenance, we developed a semi-automated data processing pipeline. The new pipeline consists of semi-automated format and data QA/QC checks. Results are communicated via on-line reports as well as an issue-tracking system. Data processing time has been reduced from 2-3 days to a few hours of manual review time, resulting in faster data availability from the time of data submission. The pipeline is scalable to the network level and has the following key features. (1) On-line results of the format QA/QC checks are available immediately for data provider review. This enables data providers to correct and resubmit data quickly. (2) The format QA/QC assessment includes an automated attempt to fix minor format errors. Data submissions that are formatted in the new AmeriFlux FP-In standard can be queued for the data QA/QC assessment, often with minimal delay. (3) Automated data QA/QC checks identify and communicate potentially erroneous data via online, graphical quick views that highlight observations with unexpected values, incorrect units, time drifts, invalid multivariate correlations, and/or radiation shadows. (4) Progress through the pipeline is integrated with an issue-tracking system that facilitates communications between data providers and the data processing team in an organized and searchable fashion. 
Through development of these and other features of the pipeline, we present solutions to challenges that include optimizing automated with manual processing, bridging legacy data management infrastructure with various software tools, and working across interdisciplinary and international science cultures. Additionally, we discuss results from community member feedback that helped refine QA/QC communications for efficient data submission and revision.
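The pipeline stages described above (format check, automated minor fixes, queuing for data QA/QC, return to provider on failure) can be sketched in a few lines of Python. This is an illustrative assumption of how such a stage might look: the variable list, the auto-fixes, and the function names are hypothetical, not the actual AmeriFlux FP-In specification.

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical expected-variable list; the real FP-In standard differs.
EXPECTED_VARS = {"TIMESTAMP_START", "TIMESTAMP_END", "FC", "TA", "RH"}

@dataclass
class Submission:
    site_id: str
    header: list
    issues: list = field(default_factory=list)

data_qaqc_queue = deque()

def format_qaqc(sub):
    """Attempt minor auto-fixes (whitespace, case); record remaining issues."""
    sub.header = [h.strip().upper() for h in sub.header]
    unknown = [h for h in sub.header if h not in EXPECTED_VARS]
    if unknown:
        sub.issues.append(f"unknown variables: {unknown}")
    return not unknown

def process(sub):
    """Queue a clean submission for data QA/QC, or return it to the provider."""
    if format_qaqc(sub):
        data_qaqc_queue.append(sub)  # queued with minimal delay
        return "queued for data QA/QC"
    return "returned to provider: " + "; ".join(sub.issues)
```

A submission whose only problems are fixable (stray whitespace, lower case) passes straight through to the data QA/QC queue; one with unknown variables is reported back for correction and resubmission.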
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
NASA Astrophysics Data System (ADS)
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. 
As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci's classical works and with the search for the quickest algorithm for obtaining the extremum of a convex function: the profit intensity reaches its maximum when the probability of transaction is given by the golden ratio rule (√5 - 1)/2. This condition sets a sharp criterion for the validity of the model and can be tested with real market data.
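The golden-ratio rule quoted above can be checked numerically. The snippet below only verifies the defining identity of p = (√5 - 1)/2, namely p² + p = 1 (equivalently the fixed-point form p = 1/(1 + p)); it does not reproduce the paper's projective-geometry trading model.

```python
import math

# Transaction probability at the profit-intensity maximum (golden ratio rule).
p = (math.sqrt(5) - 1) / 2

assert abs(p**2 + p - 1) < 1e-12     # defining identity of the golden ratio
assert abs(p - 1 / (1 + p)) < 1e-12  # equivalent fixed-point form
```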
Kazerounian, Sohrob; Grossberg, Stephen
2014-01-01
How are sequences of events that are temporarily stored in a cognitive working memory unitized, or chunked, through learning? Such sequential learning is needed by the brain in order to enable language, spatial understanding, and motor skills to develop. In particular, how does the brain learn categories, or list chunks, that become selectively tuned to different temporal sequences of items in lists of variable length as they are stored in working memory, and how does this learning process occur in real time? The present article introduces a neural model that simulates learning of such list chunks. In this model, sequences of items are temporarily stored in an Item-and-Order, or competitive queuing, working memory before learning categorizes them using a categorization network, called a Masking Field, which is a self-similar, multiple-scale, recurrent on-center off-surround network that can weigh the evidence for variable-length sequences of items as they are stored in the working memory through time. A Masking Field hereby activates the learned list chunks that represent the most predictive item groupings at any time, while suppressing less predictive chunks. In a network with a given number of input items, all possible ordered sets of these item sequences, up to a fixed length, can be learned with unsupervised or supervised learning. The self-similar multiple-scale properties of Masking Fields interacting with an Item-and-Order working memory provide a natural explanation of George Miller's Magical Number Seven and Nelson Cowan's Magical Number Four. The article explains why linguistic, spatial, and action event sequences may all be stored by Item-and-Order working memories that obey similar design principles, and thus how the current results may apply across modalities. Item-and-Order properties may readily be extended to Item-Order-Rank working memories in which the same item can be stored in multiple list positions, or ranks, as in the list ABADBD. 
Comparisons with other models, including TRACE, MERGE, and TISK, are made. PMID:25339918
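A toy sketch of the Item-and-Order (competitive queuing) storage scheme discussed above: list order is coded by a primacy gradient of activation levels, and readout repeatedly selects the most active item and then suppresses it. The gradient value is an illustrative assumption, items are assumed unique (repeats would need Item-Order-Rank coding), and the Masking Field chunking network is not modeled here.

```python
def store(items, gradient=0.8):
    # Earlier list items receive larger activations (primacy gradient).
    return {item: gradient ** i for i, item in enumerate(items)}

def recall(memory):
    acts, out = dict(memory), []
    while acts:
        winner = max(acts, key=acts.get)  # competitive choice of max-active item
        out.append(winner)
        del acts[winner]                  # self-inhibition after readout
    return out
```

Because activations decrease monotonically across list positions, competitive readout reproduces the stored order.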
Atela, Martin; Bakibinga, Pauline; Ettarh, Remare; Kyobutungi, Catherine; Cohn, Simon
2015-12-04
Enhancing accountability in health systems is increasingly emphasised as crucial for improving the nature and quality of health service delivery worldwide and particularly in developing countries. Accountability mechanisms include, among others, health facilities committees, suggestion boxes, facility and patient charters. However, there is a dearth of information regarding the nature of and factors that influence the performance of accountability mechanisms, especially in developing countries. We examine community members' experiences of one such accountability mechanism, the health facility charter in Kericho District, Kenya. A household survey was conducted in 2011 among 1,024 respondents (36% male, 64% female) aged 17 years and above stratified by health facility catchment area, situated in a division in Kericho District. In addition, sixteen focus group discussions were conducted with health facility users in the four health facility catchment areas. Quantitative data were analysed through frequency distributions and cross-tabulations. Qualitative data were transcribed and analysed using a thematic approach. The majority (65%) of household survey respondents had seen their local facility service charter, 84% of whom had read the information on the charter. Of these, 83% found the charter to be useful or very useful. According to the respondents, the charters provided useful information about the services offered and their costs, gave users a voice to curb potential overcharging and helped users plan their medical expenses before receiving the service. However, community members cited several challenges with using the charters: non-adherence to charter provisions by health workers; illegibility and language issues; lack of expenditure records; lack of time to read and understand them, often due to pressures around queuing; and socio-cultural limitations. 
Findings from this study suggest that improving the compliance of health facilities in districts across Kenya with regard to the implementation of the facility service charter is critical for accountability and community satisfaction with service delivery. To improve the compliance of health facilities, attention needs to be focused on mechanisms that help enforce official guidelines, address capacity gaps, and enhance public awareness of the charters and their use.
Alhassan, Robert Kaba; Nketiah-Amponsah, Edward; Arhinful, Daniel Kojo
2016-12-01
Nearly four decades after the Alma-Ata declaration of 1978 on the need for active client/community participation in healthcare, not much has been achieved in this regard, particularly in resource-constrained countries like Ghana, where over 70 % of communities in rural areas access basic healthcare from primary health facilities. Systematic Community Engagement (SCE) in healthcare quality assessment remains a grey area in many health systems in Africa, despite its increasing importance in promoting universal access to quality basic healthcare services. The objective was to design and implement SCE interventions that involve existing community groups engaged in healthcare quality assessment in 32 intervention primary health facilities. The SCE interventions form part of a four-year randomized controlled trial (RCT) in the Greater Accra and Western regions of Ghana. Community groups (n = 52) were purposively recruited and engaged to assess non-technical components of healthcare quality, recommend quality improvement plans and reward best performing facilities. The interventions comprised five cyclical implementation steps executed for nearly a year. The Wilcoxon signed-rank test was used to ascertain differences in group perceptions of service quality during the first and second assessments, and ordered logistic regression analysis was performed to determine factors associated with groups' perception of healthcare quality. Healthcare quality was perceived to be lowest in non-technical areas such as: information provision to clients, directional signs in clinics, drug availability, fairness in queuing, waiting times, and information provision on use of suggestion boxes and feedback on clients' complaints. Overall, services in private health facilities were perceived to be better than public facilities (p < 0.05). Community groups dominated by artisans and elderly members (60+ years) had better perspectives on healthcare quality than youthful groups (Coef. 
=1.78; 95 % CI = [-0.16 3.72]) and other categories of community groups (Coef. = 0.98; 95 % CI = [-0.10 2.06]). Non-technical components of healthcare quality remain critical to clients and communities served by primary healthcare providers. The SCE concept is a potential innovative and complementary quality improvement strategy that could help enhance client experiences, trust and confidence in healthcare providers. SCE interventions are more cost effective, community-focused and could easily be scaled-up and sustained by local health authorities.
Incorporating Brokers within Collaboration Environments
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; de Torcy, A.
2013-12-01
A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. 
Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
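The broker idea described above can be sketched as a small registry: a client issues a generic request against a uniform name space, and a per-resource driver translates it into that resource's protocol. The driver names and protocol strings below are hypothetical illustrations, not actual iRODS micro-service APIs.

```python
DRIVERS = {}

def driver(resource_type):
    """Register a protocol driver (broker) for one resource type."""
    def register(fn):
        DRIVERS[resource_type] = fn
        return fn
    return register

@driver("thredds")
def thredds_get(path):
    # Hypothetical translation to a THREDDS/OPeNDAP-style request.
    return f"OPeNDAP subset request for {path}"

@driver("posix")
def posix_get(path):
    # Hypothetical translation to a local file-system access.
    return f"open({path!r})"

def brokered_get(resource_type, path):
    # Same logical name space in front; protocol-specific access behind.
    return DRIVERS[resource_type](path)
```

The client calls `brokered_get` with a uniform name regardless of where the object lives; only the registered driver knows the remote protocol.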
Optical burst switching for the next generation Optical Internet
NASA Astrophysics Data System (ADS)
Yoo, Myungsik
2000-11-01
In recent years, Internet Protocol (IP) over Wavelength Division Multiplexing (WDM) networks for the next generation Internet (or the so-called Optical Internet) have received enormous attention. There are two main drivers for an Optical Internet. One is the explosion of Internet traffic, which seems to keep growing exponentially. The other driver is the rapid advance in WDM optical networking technology. In this study, key issues in the optical (WDM) layer will be investigated. As a novel switching paradigm for the Optical Internet, Optical Burst Switching (OBS) is discussed. By leveraging the attractive properties of optical communications and, at the same time, taking into account its limitations, OBS can combine the best of optical circuit-switching and packet/cell switching. The general concept of the JET-based OBS protocol is described, including offset time and delayed reservation. In the next generation Optical Internet, one must address how to support Quality of Service (QoS) at the WDM layer since current IP provides only best effort services. The offset-time-based QoS scheme is proposed as a way of supporting QoS at the WDM layer. Unlike existing QoS schemes, the offset-time-based QoS scheme does not mandate the use of buffers to differentiate services. For the bufferless WDM switch, the performance of the offset-time-based QoS scheme is evaluated in terms of blocking probability. In addition, the extra offset time required for class isolation is quantified and the theoretical bounds on blocking probability are analyzed. The offset-time-based scheme is also applied to a WDM switch with a limited fiber delay line (FDL) buffer. We evaluate the effect of having an FDL buffer on the QoS performance of the offset-time-based scheme in terms of the loss probability and queuing delay of bursts. Finally, in order to dimension the network resources in Optical Internet backbone networks, the performance of the offset-time-based QoS scheme is evaluated for the multi-hop case. 
In particular, we consider very high performance Backbone Network Service (vBNS) backbone network. Various policies such as drop, retransmission, deflection routing and buffering are considered for performance evaluation. The performance results obtained under these policies are compared to decide the most efficient policy for the WDM backbone network.
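Blocking-probability evaluations for a bufferless loss system like the WDM switch above are commonly built on the Erlang B formula (k wavelengths, a Erlangs of offered burst traffic), computed here by the standard stable recursion. This is generic teletraffic background offered as an assumption, not the paper's offset-time class-isolation derivation.

```python
def erlang_b(wavelengths, load):
    """Erlang B blocking probability via the standard recursion."""
    b = 1.0  # blocking with zero servers
    for k in range(1, wavelengths + 1):
        b = load * b / (k + load * b)
    return b
```

With one wavelength and one Erlang of load the recursion gives a blocking probability of 0.5; a second wavelength drops it to 0.2, and blocking keeps falling as wavelengths are added at fixed load.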
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queuing software, the script system established its own simple first-in-first-out queuing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. 
This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
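The fallback first-in-first-out queue described above, for hosts that run no queuing software, can be sketched as a minimal class: jobs start in submission order, one at a time per host. The class and job names are illustrative assumptions, not the actual ADTT scripts.

```python
from collections import deque

class FifoHostQueue:
    """One host's job queue: strict submission order, one job running at a time."""
    def __init__(self):
        self.pending = deque()
        self.running = None

    def submit(self, job):
        self.pending.append(job)      # first in, first out

    def step(self):
        """Finish the running job, if any, and start the next pending one."""
        finished, self.running = self.running, None
        if self.pending:
            self.running = self.pending.popleft()
        return finished
```

Each call to `step` models one scheduling cycle: the current job completes and the oldest pending job takes its place.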
Issues in Energy Economics Led by Emerging Linkages between the Natural Gas and Power Sectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platt, Jeremy B.
2007-09-15
Fuel prices in 2006 continued at record levels, with uranium continuing upward unabated and coal, SO₂ emission allowances, and natural gas all softening. This softening did not continue for natural gas, however, whose prices rose, fell and rose again, first following weather influences and, by the second quarter of 2007, continuing at high levels without any support from fundamentals. This article reviews these trends and describes the remarkable increases in fuel expenses for power generation. By the end of 2005, natural gas claimed 55% of annual power sector fuel expenses, even though it was used for only 19% of electric generation. Although natural gas is enormously important to the power sector, the sector also is an important driver of the natural gas market, growing to over 28% of the market even as total use has declined. The article proceeds to discuss globalization, natural gas price risk, and technology developments. Forces of globalization are poised to affect the energy markets in new ways, new in not being only about oil. Of particular interest are the growth of intermodal traffic and its little-understood impacts on rail traffic patterns and transportation costs, and the expected rapid expansion of LNG imports toward the end of the decade. Two aspects of natural gas price risk are discussed: how understanding the use of gas in the power sector helps define price ceilings and floors for natural gas, and how the recent increase in natural gas production after years of record drilling could alter the supply-demand balance for the better. The article cautions, however, that escalation in natural gas finding and development costs is countering the more positive developments that emerged during 2006. Regarding technology, the exploitation of unconventional natural gas was one highlight. 
So too was the queuing up of coal-fired power plants for the post-2010 period, a phenomenon that has come under great pressure with many consequences, including increased pressures in the natural gas market. The most significant illustration of these forces was the early 2007 suspension of development plans by a large power company, well before the Supreme Court's ruling on CO₂ as a tailpipe pollutant and President Bush's call for global goals on CO₂ emissions.
Couto, Maria T; Tillgren, Per; Söderbäck, Maja
2011-10-13
Workplace violence (WPV) is an occupational health hazard in both low and high income countries. To design WPV prevention programs, prior knowledge and understanding of conditions in the targeted population are essential. This study explores and describes the views of drivers and conductors on the causes of WPV and ways of preventing it in the road passenger transport sector in Maputo City, Mozambique. The design was qualitative. Participants were purposefully selected from among transport workers identified as victims of WPV in an earlier quantitative study, and with six or more years of experience in the transport sector. Data were collected in semi-structured interviews. Seven open questions covered individual views on causes of WPV and its prevention, based on the interviewees' experiences of violence while on duty. Thirty-two transport professionals were interviewed. The data were analyzed by means of qualitative content analysis. The triggers and causes of violence included fare evasion, disputes over revenue owing to owners, alcohol abuse, overcrowded vehicles, and unfair competition for passengers. Failures to meet passenger expectations, e.g. by-passing parts of a bus route or missing stops, were also important. There was disrespect on the part of transport workers, e.g. being rude to passengers and jumping of queues at taxi ranks, and there were also robberies. Proposals for prevention included: training for workers on conflict resolution, and for employers on passenger-transport administration; and, promoting learning among passengers and workers on how to behave when traveling collectively. Regarding control and supervision, there were expressed needs for the recording of mileage, and for the sanctioning of workers who transgress queuing rules at taxi ranks. The police or supervisors should prevent drunken passengers from getting into vehicles, and drivers should refuse to go to dangerous, secluded neighborhoods. 
Finally, there is a need for an institution to judge alleged cases of employees not handing over demanded revenues to their employer. The causes of WPV lie in problems regarding money, behavior, environment, organization and crime. Suggestions for prevention include education, control to avoid critical situations, and a judicial system to assess malpractices. Further research in the road passenger transport sector in Maputo City, Mozambique and similar settings is warranted.
NASA Technical Reports Server (NTRS)
Miller, M. Meghan
1998-01-01
Accomplishments: (1) Continues GPS monitoring of surface change during and following the fortuitous occurrence of the Mw = 7.3 Landers earthquake in our network, in order to characterize earthquake dynamics and accelerated activity of related faults as far as 100s of kilometers along strike. (2) Integrates the geodetic constraints into consistent kinematic descriptions of the deformation field that can in turn be used to characterize the processes that drive geodynamics, including seismic cycle dynamics. In 1991, we installed and occupied a high precision GPS geodetic network to measure transform-related deformation that is partitioned from the Pacific - North America plate boundary northeastward through the Mojave Desert, via the Eastern California shear zone to the Walker Lane. The onset of the Mw = 7.3 June 28, 1992, Landers, California, earthquake sequence within this network poses unique opportunities for continued monitoring of regional surface deformation related to the culmination of a major seismic cycle, characterization of the dynamic behavior of continental lithosphere during the seismic sequence, and post-seismic transient deformation. During the last year, we have reprocessed all three previous epochs for which JPL fiducial-free point positioning products were available and are queued for the remaining needed products, completed two field campaigns monitoring approx. 20 sites (October 1995 and September 1996), begun modeling by development of a finite element mesh based on network station locations, and developed manuscripts dealing with both the Landers-related transient deformation at the latitude of Lone Pine and the velocity field of the whole experiment. We are currently deploying a 1997 observation campaign (June 1997). We use GPS geodetic studies to characterize deformation in the Mojave Desert region and related structural domains to the north, and geophysical modeling of lithospheric behavior. 
The modeling is constrained by our existing and continued GPS measurements, which will provide much needed data on far-field strain accumulation across the region and on the deformational response of continental lithosphere during and following a large earthquake, forming the basis for kinematic and dynamic modeling of secular and seismic-cycle deformation. GPS geodesy affords both regional coverage and high precision that uniquely bear on these problems.
Ethical issues in pediatric emergency mass critical care.
Antommaria, Armand H Matheny; Powell, Tia; Miller, Jennifer E; Christian, Michael D
2011-11-01
As a result of recent events, including natural disasters and pandemics, mass critical care planning has become a priority. In general, planning involves limiting the scope of disasters, increasing the supply of medical resources, and allocating scarce resources. Entities at varying levels have articulated ethical frameworks to inform policy development. In spite of this increased focus, children have received limited attention. Children require special attention because of their unique vulnerabilities and needs. In May 2008, the Task Force for Mass Critical Care published guidance on provision of mass critical care to adults. Acknowledging that the critical care needs of children during disasters were unaddressed by this effort, a 17-member Steering Committee, assembled by the Oak Ridge Institute for Science and Education with guidance from members of the American Academy of Pediatrics, convened in April 2009 to determine priority topic areas for pediatric emergency mass critical care recommendations.Steering Committee members established subgroups by topic area and performed literature reviews of MEDLINE and Ovid databases. Draft documents were subsequently developed and revised based on the feedback from the Task Force. The Pediatric Emergency Mass Critical Care Task Force, composed of 36 experts from diverse public health, medical, and disaster response fields, convened in Atlanta, GA, on March 29-30, 2010. This document reflects expert input from the Task Force in addition to the most current medical literature. The Ethics Subcommittee recommends that surge planning seek to provide resources for children in proportion to their percentage of the population or preferably, if data are available, the percentage of those affected by the disaster. Generally, scarce resources should be allocated on the basis of need, benefit, and the conservation of resources. Estimates of need, benefit, and resource utilization may be more subjective or objective. 
While the Subcommittee favors more objective methods, pediatrics lacks a simple, validated scoring system to predict benefit or resource utilization. The Subcommittee hesitantly recommends relying on expert opinion while pediatric triage tools are developed. If resources remain inadequate, they should then be allocated based on queuing or lottery. Choosing between these methods is based on ethical, psychological, and practical considerations upon which the Subcommittee could not reach consensus. The Subcommittee unanimously believes the proposal to favor individuals between 15 and 40 yrs of age is inappropriate. Other age-based criteria and criteria based on social role remain controversial. The Subcommittee recommends continued work to engage all stakeholders, especially the public, in deliberation about these issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-Ping; Chughtai, Aamer
2014-08-15
Purpose: The authors are developing a computer-aided detection system to assist radiologists in analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking method, which are the essential steps to define the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segments and tracks each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as reference standard following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. 
When the overlap threshold is increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by the radiologists in 23 of the cases. Conclusions: The authors' MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve the accuracy for arterial segments affected by motion artifacts, severe calcified plaques, and noncalcified soft plaques, and to reduce the false tracking of veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.
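The branch tracking just described is, at heart, a breadth-first traversal: each tracked segment's branches are pushed onto a queue and tracked in turn until the queue is exhausted. A minimal sketch of that pattern in Python, using an invented toy vessel tree (segment names are illustrative, not taken from the paper):

```python
from collections import deque

# Toy vessel tree: each segment maps to its branch segments.
# Names are illustrative, not from the paper.
VESSEL_TREE = {
    "LM": ["LAD", "LCx"],
    "LAD": ["D1", "D2"],
    "LCx": ["OM1"],
    "D1": [], "D2": [], "OM1": [],
}

def track_from_seed(seed, tree):
    """Track a vessel and its branches breadth-first: each tracked
    segment's branches are queued and tracked in turn until the
    queue is exhausted."""
    queue = deque([seed])
    tracked = []
    while queue:
        segment = queue.popleft()    # track this segment
        tracked.append(segment)
        queue.extend(tree[segment])  # queue its branches
    return tracked

order = track_from_seed("LM", VESSEL_TREE)
```

The queue guarantees every branch discovered along a tracked vessel is eventually visited exactly once, regardless of how many bifurcations the tree contains.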
Invited review: Animal-based indicators for on-farm welfare assessment for dairy goats.
Battini, M; Vieira, A; Barbieri, S; Ajuda, I; Stilwell, G; Mattiello, S
2014-11-01
This paper reviews animal-based welfare indicators to develop a valid, reliable, and feasible on-farm welfare assessment protocol for dairy goats. The indicators were considered in the light of the 4 accepted principles (good feeding, good housing, good health, appropriate behavior) subdivided into 12 criteria developed by the European Welfare Quality program. We will only examine the practical indicators to be used on-farm, excluding those requiring the use of specific instruments or laboratory analysis and those that are recorded at the slaughterhouse. Body condition score, hair coat condition, and queuing at the feed barrier or at the drinker seem the most promising indicators for the assessment of the "good feeding" principle. As to "good housing," some indicators were considered promising for assessing "comfort around resting" (e.g., resting in contact with a wall) or "thermal comfort" (e.g., panting score for the detection of heat stress and shivering score for the detection of cold stress). Several indicators related to "good health," such as lameness, claw overgrowth, presence of external abscesses, and hair coat condition, were identified. As to the "appropriate behavior" principle, different criteria have been identified: agonistic behavior is largely used as the "expression of social behavior" criterion, but it is often not feasible for on-farm assessment. Latency to first contact and the avoidance distance test can be used as criteria for assessing the quality of the human-animal relationship. Qualitative behavior assessment seems to be a promising indicator for addressing the "positive emotional state" criterion. Promising indicators were identified for most of the considered criteria; however, no valid indicator has been identified for "expression of other behaviors." 
Interobserver reliability has rarely been assessed and warrants further attention; in contrast, short-term intraobserver reliability is frequently assessed and some studies consider mid- and long-term reliability. The feasibility of most of the reviewed indicators in commercial farms still needs to be carefully evaluated, as several studies were performed under experimental conditions. Our review highlights some aspects of goat welfare that have been widely studied, but some indicators need to be investigated further and drafted before being included in a valid, reliable, and feasible welfare assessment protocol. The indicators selected and examined may be an invaluable starting point for the development of an on-farm welfare assessment protocol for dairy goats. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Rapid Corner Detection Using FPGAs
NASA Technical Reports Server (NTRS)
Morfopoulos, Arin C.; Metz, Brandon C.
2010-01-01
In order to perform precision landings for space missions, a control system must be accurate to within ten meters. Feature detection applied to images taken during descent and correlated against the provided base image is computationally expensive: processing a single image requires tens of seconds, while the goal is to process multiple images per second. To solve this problem, this algorithm takes the processing load from the central processing unit (CPU) and gives it to a reconfigurable field programmable gate array (FPGA), which is able to compute data in parallel at very high clock speeds. The workload of the processor then becomes simple: read an image from a camera, transfer it into the FPGA, and read the results back from the FPGA. The Harris Corner Detector uses the determinant and trace to find a corner score, with each step of the computation occurring on independent clock cycles. Essentially, the image is converted into x and y derivative maps. Once three lines of pixel information have been queued up, valid pixel derivatives are clocked into the product and averaging phase of the pipeline. Each x and y derivative is squared, the product of the Ix and Iy derivatives is computed, and each value is stored in a WxN buffer, where W represents the size of the integration window and N is the width of the image. In this particular case, a window size of 5 was chosen, and the image is 640 x 480. Over the WxN window, a Gaussian weighting is applied (to bring out the stronger corners), and then each value in the entire window is summed and stored. The required components of the equation are then in place, and it is just a matter of taking the determinant and trace. It should be noted that the trace is weighted by a constant k, a value found empirically to lie between 0.04 and 0.15 (0.05 in this implementation).
The constant k determines the number of corners available to be compared against a threshold sigma to mark a valid corner. After a fixed delay from when the first pixel is clocked in (to fill the pipeline), a score is produced on each successive clock. This score corresponds to an (x, y) location within the image. If the score is higher than the predetermined threshold sigma, a flag is set high and the location is recorded.
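The score described above is the standard Harris corner response, R = det(M) - k * trace(M)^2, where M collects windowed sums of the derivative products Ix^2, Iy^2 and Ix*Iy. A software sketch of the same computation (a uniform box window stands in for the hardware's Gaussian weighting, and the function names are our own):

```python
import numpy as np

def box_sum(a, win):
    """Sum of `a` over every win x win window (valid placements),
    via an integral image."""
    s = np.cumsum(np.cumsum(a, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero border for the integral image
    return (s[win:, win:] - s[:-win, win:]
            - s[win:, :-win] + s[:-win, :-win])

def harris_response(img, k=0.05, win=5):
    """Harris score per window: R = det(M) - k * trace(M)**2."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_sum(ix * ix, win)   # windowed sum of Ix^2
    syy = box_sum(iy * iy, win)   # windowed sum of Iy^2
    sxy = box_sum(ix * iy, win)   # windowed sum of Ix*Iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Corners are locations whose score exceeds a threshold sigma:
# corners = harris_response(img) > sigma
scores = harris_response(np.ones((10, 10)))
```

A flat image produces zero response everywhere; only windows whose gradient structure has two strong, independent directions score high, which is exactly why the determinant/trace combination discriminates corners from edges.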
Distributed Energy Resources Customer Adoption Model - Graphical User Interface, Version 2.1.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewald, Friedrich; Stadler, Michael; Cardoso, Goncalo F
The DER-CAM Graphical User Interface has been redesigned to consist of a dynamic tree structure on the left side of the application window, allowing users to quickly navigate between different data categories and views. Views can be tables of model parameters and input data, the optimization results, or a graphical interface to draw circuit topology and visualize investment results. The model parameters and input data consist of tables where values are assigned to specific keys. The aggregation of all model parameters and input data amounts to the data required to build a DER-CAM model, and is passed to the GAMS solver when users initiate the DER-CAM optimization process. Passing data to the GAMS solver relies on a Java server that handles DER-CAM requests, queuing, and results delivery. This component of the DER-CAM GUI can be deployed either locally or remotely, and constitutes an intermediate step between user data input and manipulation and the execution of a DER-CAM optimization in the GAMS engine. The results view shows the results of the DER-CAM optimization and distinguishes between a single- and a multi-objective process. The single optimization runs the DER-CAM optimization once and presents the results as a combination of summary charts and hourly dispatch profiles. The multi-objective optimization process consists of a sequence of runs initiated by the GUI, including: 1) CO2 minimization, 2) cost minimization, and 3) a user-defined number of points in between objectives 1) and 2). The multi-objective results view includes both access to the detailed results of each point generated by the process and the generation of a Pareto frontier graph to illustrate the trade-off between objectives. DER-CAM GUI 2.1.8 also introduces the ability to graphically generate circuit topologies, enabling support for DER-CAM 5.0.0.
This feature consists of: 1) the drawing area, where users can manually create nodes, define their properties (e.g., point of common coupling, slack bus, load), and connect them through edges representing power lines, transformers, or heat pipes, all with user-defined characteristics (e.g., length, ampacity, inductance, or heat loss); and 2) the tables, which display the user-defined topology in the final numerical form that will be passed to the DER-CAM optimization. Finally, the DER-CAM GUI is also deployed with a database schema that allows users to provide different energy load profiles, solar irradiance profiles, and tariff data that can be stored locally and later used in any DER-CAM model. However, no real data will be delivered with this version.
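The multi-objective sequence (a CO2-minimizing run, a cost-minimizing run, and user-defined points in between) can be illustrated with a toy weighted-sum sweep. DER-CAM itself solves a MILP in GAMS, so the candidate set and objective values below are invented purely to show the Pareto-sweep idea:

```python
# Invented candidate solutions with two competing objectives.
candidates = [
    {"name": "A", "cost": 100.0, "co2": 50.0},
    {"name": "B", "cost": 120.0, "co2": 30.0},
    {"name": "C", "cost": 150.0, "co2": 20.0},
    {"name": "D", "cost": 160.0, "co2": 45.0},  # dominated by B
]

def best(weight_cost):
    """Pick the candidate minimizing a weighted sum of objectives.
    weight_cost = 1.0 is pure cost minimization, 0.0 pure CO2."""
    return min(candidates,
               key=lambda c: weight_cost * c["cost"]
                             + (1 - weight_cost) * c["co2"])

# Endpoints plus intermediate weightings, mimicking the GUI's
# sequence of runs; the selected names trace a Pareto frontier.
frontier = [best(w)["name"] for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Dominated candidates (here "D") are never selected at any weighting, which is what makes the swept points suitable for plotting as a frontier of cost/CO2 trade-offs.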
JTS and its Application in Environmental Protection Applications
NASA Astrophysics Data System (ADS)
Atanassov, Emanouil; Gurov, Todor; Slavov, Dimitar; Ivanovska, Sofiya; Karaivanova, Aneta
2010-05-01
The environmental protection domain was identified as one of high interest for South East Europe, addressing practical problems related to security and quality of life. The gridification of the Bulgarian applications MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies, which aims to develop an efficient Grid implementation of a sensitivity analysis of the Danish Eulerian Model), MSACM (Multi-Scale Atmospheric Composition Modeling, which aims to produce an integrated, multi-scale, Balkan-region-oriented modelling system able to interface the scales of the problem from emissions on the urban scale to their transport and transformation on the local and regional scales), and MSERRHSA (Modeling System for Emergency Response to the Release of Harmful Substances in the Atmosphere, which aims to develop and deploy a modeling system for emergency response to the release of harmful substances in the atmosphere, targeted at the SEE and more specifically the Balkan region) faces several challenges. These applications are resource intensive, in terms of both CPU utilization and data transfers and storage. The use of the applications for operational purposes poses requirements for availability of resources, which are difficult to meet in a dynamically changing Grid environment. The validation of the applications is resource intensive and time consuming. The successful resolution of these problems requires collaborative work and support from the infrastructure operators. The infrastructure operators, however, are interested in avoiding underutilization of resources. That is why we developed the Job Track Service and tested it during the development of the grid implementations of MCSAES, MSACM and MSERRHSA. The Job Track Service (JTS) is a grid middleware component which facilitates the provision of Quality of Service in grid infrastructures using gLite middleware, such as EGEE and SEEGRID.
The service is based on messaging middleware and uses standard protocols such as AMQP (Advanced Message Queuing Protocol) and XMPP (eXtensible Messaging and Presence Protocol) for real-time communication, while its security model is based on GSI authentication. It enables resource owners to provide the most popular types of QoS of execution to some of their users, using a standardized model. The first version of the service offered services to individual users. In this work we describe a new version of the Job Track Service offering application-specific functionality, geared towards the specific needs of the environmental modelling and protection applications and oriented towards collaborative usage by groups and subgroups of users. We used the modular design of the JTS to implement plugins enabling smoother interaction of the users with the Grid environment. Our experience shows improved response times and a decreased failure rate for executions of the applications. In this work we present such observations from the use of the South East European Grid infrastructure.
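The messaging pattern underlying JTS amounts to publishing job-state events onto a queue that consumers drain asynchronously. A broker-free, in-memory sketch (the message fields are illustrative and are not the actual JTS or AMQP wire format):

```python
import json
from collections import deque

# In-memory stand-in for a message queue: producers publish
# job-state events, a consumer drains them in FIFO order.
# Real deployments would use an AMQP broker instead.
queue = deque()

def publish(job_id, state):
    """Serialize and enqueue a job-state event."""
    queue.append(json.dumps({"job": job_id, "state": state}))

def drain():
    """Consume every queued event, preserving publish order."""
    events = []
    while queue:
        events.append(json.loads(queue.popleft()))
    return events

publish("job-42", "SUBMITTED")
publish("job-42", "RUNNING")
publish("job-42", "DONE")
states = [e["state"] for e in drain()]
```

Decoupling producers from consumers this way is what lets a tracking service observe job progress in real time without the submitting client having to stay connected.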
[Awareness of adult attention-deficit/hyperactivity disorder (ADHD) in Greece].
Pehlivanidis, A
2012-06-01
Attention-deficit/hyperactivity disorder (ADHD) is the most common neurodevelopmental disorder of childhood and persists into adulthood in the majority of cases. In adults, the clinical picture of ADHD is complex, and comorbidity with other psychiatric disorders is the rule. Documenting that the disorder had a childhood onset, together with the various comorbid symptomatologies present in both childhood and adult life, represents the most influential obstacle to an accurate clinical diagnosis of the disorder. In 75% of cases of adult ADHD there is at least one coexisting comorbid disorder, with anxiety and mood disorders as well as substance abuse and impulse control disorders being the most prevalent. Adult psychiatrists have limited experience in the diagnosis, treatment and overall management of the disorder. Greece is a member of the European Network Adult ADHD (ENAA), founded in 2003, which aims to increase awareness of the disorder and to improve knowledge and patient care for adults with ADHD across Europe. A clinic where diagnosis as well as treatment recommendations are given after a thorough assessment of adult ADHD patients is hosted at the First Department of Psychiatry of the National and Kapodistrian University of Athens. The clinic is in close collaboration with ENAA. The diagnosis of ADHD is given after a detailed evaluation of the patient, based on the history taken, self-administered questionnaires and a specific psychiatric interview. Reliably tracing the onset of symptoms back to early childhood, the current symptomatology, and its impact on at least two major areas of functioning (school, home, work or personal relationships) are pivotal for the assessment procedure.
Special attention should be paid to distinguishing symptoms that often coexist with the core symptoms of ADHD, such as emotional lability, incessant mental activity and avoidance of situations like queuing, especially when frustration is also present, from those indicating a comorbid disorder, e.g. bipolar disorder, major depression, anxiety disorders or personality disorders. Coexistence with substance abuse requires special attention, as ADHD is quite prevalent in this group. In treating an ADHD patient, the rule is a multidimensional intervention. Comorbid psychiatric disorders must be treated first. Psychoeducation of the patient is needed in most cases, as well as the administration of ADHD-specific psychotropic medication. Coaching, cognitive therapy and family interventions have proved to be the most efficacious psychosocial treatments. In the context of our university outpatient clinic, an observational study exploring the occurrence of ADHD among patients with anxiety and depressive disorders took place; 15% of these patients received the diagnosis of ADHD for the first time in their lives. The above findings indicate the need for further training of psychiatrists in the recognition and treatment of adult ADHD.
The SGI/Cray T3E: Experiences and Insights
NASA Technical Reports Server (NTRS)
Bernard, Lisa Hamet
1998-01-01
The NASA Goddard Space Flight Center is home to the fifth most powerful supercomputer in the world, a 1024-processor SGI/Cray T3E-600. The original 512-processor system was placed at Goddard in March 1997 as part of a cooperative agreement between the High Performance Computing and Communications Program's Earth and Space Sciences Project (ESS) and SGI/Cray Research. The goal of this system is to facilitate achievement of the Project milestones of 10, 50 and 100 GFLOPS sustained performance on selected Earth and space science application codes. The additional 512 processors were purchased in March 1998 by the NASA Earth Science Enterprise for the NASA Seasonal to Interannual Prediction Project (NSIPP). These two "halves" still operate as a single system, and must satisfy the unique requirements of both aforementioned groups, as well as of guest researchers from the Earth, space, microgravity, manned space flight and aeronautics communities. Few large scalable parallel systems are configured for capability computing, so models are hard to find. This unique environment has created a challenging system administration task, and has yielded insights into the supercomputing needs of the various NASA Enterprises, as well as into the strengths and weaknesses of the T3E architecture and software. The T3E is a distributed memory system in which the processing elements (PEs) are connected by a low-latency, high-bandwidth bidirectional 3-D torus. Due to the focus on high-speed communication between PEs, the T3E requires PEs to be allocated contiguously per job. Further, jobs will only execute on the user-specified number of PEs, and PE timesharing is possible but impractical. With a job mix highly varied in both size and runtime, the result is PE fragmentation and an inability to achieve near-100% utilization. SGI/Cray has provided several scheduling and configuration tools to minimize the impact of fragmentation.
These tools include PScheD (the political scheduler), GRM (the global resource manager) and NQE (the Network Queuing Environment). Features and impact of these tools will be discussed, as will the resulting performance and utilization data. As a distributed memory system, the T3E is designed to be programmed through explicit message passing. Consequently, certain assumptions related to code design are made by the operating system (UNICOS/mk) and its scheduling tools. With the exception of HPF, which does run on the T3E, however poorly, alternative programming styles have the potential to impact the T3E in unexpected and undesirable ways. Several examples will be presented (preceded by the disclaimer, "Don't try this at home! Violators will be prosecuted!").
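The fragmentation problem follows directly from the contiguous-allocation rule: the machine can hold plenty of free PEs in total while no contiguous run is long enough for a waiting job. A toy illustration (this linear model ignores the T3E's actual torus-aware placement):

```python
# Why contiguous PE allocation fragments: free PEs may be
# plentiful, yet no contiguous run fits the next job.
def longest_free_run(pes):
    """pes: list of 0 (free) / 1 (busy). Longest run of free PEs."""
    best = run = 0
    for busy in pes:
        run = 0 if busy else run + 1
        best = max(best, run)
    return best

# 16 PEs with two 4-PE jobs placed apart: 8 PEs are free...
pes = [0] * 4 + [1] * 4 + [0] * 2 + [1] * 4 + [0] * 2
free = pes.count(0)
# ...but an 8-PE job still cannot start, because the free PEs
# are split into runs of at most 4.
fits_8 = longest_free_run(pes) >= 8
```

Half the machine sits idle while an 8-PE job waits, which is exactly the utilization loss that scheduling tools like PScheD and GRM try to minimize by packing jobs more carefully.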
Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbach, Carsten; Frings, Wolfgang
2013-02-22
This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of the project is the extension of the Parallel Tools Platform (PTP) for application to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to tolerate high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status.
A set of statistics, a list of running and queued jobs, and a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially means presenting the large amount of status data in a useful manner. Users need to be able to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job-controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.
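Collecting status data by evaluating scheduler output, as described above, reduces to parsing job records into the summaries a monitoring view displays: job counts by state and a job-to-node mapping. A sketch with an invented input format (real batch systems such as PBS, TORQUE and LoadLeveler each have their own):

```python
# Raw scheduler output in a made-up "id state nodelist" format.
RAW = """\
101 running node01,node02
102 running node03
103 queued -
104 queued -
"""

def summarize(raw):
    """Parse job records into (state counts, node -> job map),
    the two ingredients of an LLview-style display."""
    jobs = {"running": 0, "queued": 0}
    nodes = {}
    for line in raw.splitlines():
        jid, state, nodelist = line.split()
        jobs[state] += 1
        for n in nodelist.split(","):
            if n != "-":          # queued jobs hold no nodes yet
                nodes[n] = jid
    return jobs, nodes

jobs, nodes = summarize(RAW)
```

A per-batch-system parser producing this common structure is one way to keep the display generic while supporting several schedulers, which matches the extensibility requirement stated in the report.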
NASA Astrophysics Data System (ADS)
Klump, Jens; Robertson, Jess
2016-04-01
The spatial and temporal extent of geological phenomena makes experiments in geology difficult, if not entirely impossible, to conduct, and the collection of data is laborious and expensive: so expensive that most of the time we cannot test a hypothesis. The aim, in many cases, is to gather enough data to build a predictive geological model. Even in a mine, where data are abundant, a model remains incomplete, because the information at the level of a blasting block is two orders of magnitude larger than a sample from a drill core, and we have to take measurement errors into account. So, what confidence can we have in a model based on sparse data, uncertainties and measurement error? Our framework consists of two layers: (a) a ground-truth layer that contains geological models, which can be statistically based on historical operations data, and (b) a network of RESTful synthetic sensor microservices which can query the ground truth for underlying properties and deliver a simulated measurement to a control layer, which could be a database or LIMS, a machine learner, or a company's existing data infrastructure. Ground-truth data are generated by an implicit geological model, which serves as a host for nested models of geological processes at smaller scales. Our two layers are implemented using Flask and Gunicorn (an open-source Python web application framework and server), the PyData stack (numpy, scipy, etc.) and RabbitMQ (an open-source message broker). Sensor data are encoded using a JSON-LD version of the SensorML and Observations and Measurements standards. Containerisation of the synthetic sensors using Docker and CoreOS allows rapid and scalable deployment of large numbers of sensors, as well as sensor discovery to form a self-organized, dynamic network of sensors.
Real-time simulation of data sources can be used to investigate crucial questions such as the potential information gain from future sensing capabilities, from new sampling strategies, or from the combination of both, and it enables us to test many "what if?" questions, both in geology and in data engineering. What would we be able to see if we could obtain data at higher resolution? How would real-time data analysis change sampling strategies? Can our data infrastructure handle many new real-time data streams? What feature engineering can be deduced for machine learning approaches? By providing a 'data sandbox' able to scale to realistic geological scenarios, we hope to start answering some of these questions. Faults happen in real-world networks. Future work will investigate the effect of failure on dynamic sensor networks and the impact on the predictive capability of machine learning algorithms.
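A synthetic sensor in this framework does three things: query the ground-truth model for the underlying property, perturb it with measurement noise, and emit an encoded observation. A minimal sketch (the ground-truth function, noise level and JSON field names are all invented for illustration; the paper's sensors use SensorML/O&M encodings served over REST):

```python
import json
import random

def ground_truth(x):
    """Stand-in implicit geological model: a property (say, ore
    grade) that declines linearly with position x."""
    return 10.0 - 0.5 * x

def measure(x, noise_sd=0.3, rng=random.Random(42)):
    """Query the ground truth, add Gaussian measurement noise,
    and emit a JSON-encoded observation."""
    value = ground_truth(x) + rng.gauss(0, noise_sd)
    return json.dumps({"location": x, "observed": round(value, 3)})

obs = json.loads(measure(4.0))
```

Because the sensor only ever sees the ground truth through this noisy interface, downstream consumers (a database, LIMS, or machine learner) can be exercised exactly as they would be against real instruments.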
Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility
NASA Astrophysics Data System (ADS)
Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro
2014-06-01
In this work we present the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an open-source batch system developed mainly by Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused on verifying both the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started by configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and ALICE, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM to enable several scheduling functionalities, such as hierarchical fair share, quality of service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job-age scheduling, job-size scheduling, and scheduling of consumable resources. We then show how different job types, such as serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues and other resources are then described. A peculiar SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system.
Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, with controlled handling of failures of such scripts. This feature is heavily used, for example, at the INFN-Tier1 to check the health status of a worker node before the execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we present several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
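Several of the scheduling functionalities listed above (fair share, user- and group-based priority, job-age and job-size scheduling) combine into a single job priority. A hedged sketch in the style of a multifactor priority calculation; the weights and the fair-share decay below are illustrative simplifications, not SLURM's exact formulas:

```python
# Illustrative multifactor priority: a weighted sum of
# normalized factors. Weights and formulas are simplified
# stand-ins, not SLURM's actual implementation.
def fairshare(norm_usage, norm_shares):
    """Decays from 1 toward 0 as a group's recent usage grows
    relative to its allocated shares."""
    return 2 ** (-norm_usage / norm_shares)

def priority(age_factor, fs_factor, size_factor,
             w_age=1000, w_fs=10000, w_size=100):
    """Combine factors (each in [0, 1]) into an integer priority."""
    return int(w_age * age_factor + w_fs * fs_factor
               + w_size * size_factor)

# A group consuming exactly its share gets fair-share factor 0.5.
fs = fairshare(norm_usage=0.25, norm_shares=0.25)
p = priority(age_factor=0.2, fs_factor=fs, size_factor=0.1)
```

Tuning the weights shifts the policy: a large fair-share weight (as here) makes under-served groups jump the queue, while a large age weight approximates FIFO.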
Automated collection of imaging and phenotypic data to centralized and distributed data repositories
King, Margaret D.; Wood, Dylan; Miller, Brittny; Kelly, Ross; Landis, Drew; Courtney, William; Wang, Runtang; Turner, Jessica A.; Calhoun, Vince D.
2014-01-01
Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time-consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). It was initially developed for the investigators at the Mind Research Network (MRN), but is now available to neuroimaging institutions worldwide. Self Assessment (SA) is an application embedded in the Assessment Manager (ASMT) tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. Instruments (surveys) are created through ASMT and include many unique question types and associated SA features that can be implemented to help the flow of assessment administration. SA provides an instrument queuing system with an easy-to-use drag-and-drop interface for research staff to set up participants' queues. After a queue has been created for a participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data are stored in a PostgreSQL database at MRN. These data are only accessible by users who have explicit permission to access them through their COINS user accounts and access to the MRN network.
This allows for high-volume data collection with minimal user access to PHI (protected health information). An added benefit of using COINS is the ability to collect, store and share imaging and assessment data with no interaction with outside tools or programs. All study data collected (imaging and assessment) are stored and exported with a participant's unique subject identifier, so there is no need to keep extra spreadsheets or databases to link and track the data. Data are easily exported from COINS via the Query Builder and study portal tools, which allow fine-grained selection of the data to be exported into comma-separated-value file format for easy import into statistical programs. There is a great need for data collection tools that limit human intervention and error while providing users with an intuitive design. COINS aims to be a leader in database solutions for research studies collecting data from several different modalities. PMID:24926252
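The instrument queuing system described above can be pictured as a per-participant FIFO: staff enqueue instruments, and the portal serves the next one until the queue is empty. A minimal sketch (function and instrument names are illustrative; COINS's actual data model is not shown here):

```python
from collections import deque

# Per-participant FIFO queues of instruments (surveys).
queues = {}

def enqueue(participant, instrument):
    """Staff-side: append an instrument to a participant's queue."""
    queues.setdefault(participant, deque()).append(instrument)

def next_instrument(participant):
    """Portal-side: serve the next queued instrument, or None
    when the participant's queue is exhausted."""
    q = queues.get(participant)
    return q.popleft() if q else None

enqueue("P001", "demographics")
enqueue("P001", "mood_survey")
first = next_instrument("P001")
second = next_instrument("P001")
done = next_instrument("P001")
```

Keeping the queue server-side is what lets participants resume from home or on site: the portal simply serves whatever remains in their queue, in the order staff arranged it.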
King, Margaret D; Wood, Dylan; Miller, Brittny; Kelly, Ross; Landis, Drew; Courtney, William; Wang, Runtang; Turner, Jessica A; Calhoun, Vince D
2014-01-01
Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). It was initially developed for the investigators at the Mind Research Network (MRN), but is now available to neuroimaging institutions worldwide. Self Assessment (SA) is an application embedded in the Assessment Manager (ASMT) tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. Instruments (surveys) are created through ASMT and include many unique question types and associated SA features that can be implemented to help the flow of assessment administration. SA provides an instrument queuing system with an easy-to-use drag and drop interface for research staff to set up participants' queues. After a queue has been created for the participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data is stored in a PostgresSQL database at MRN. This data is only accessible by users that have explicit permission to access the data through their COINS user accounts and access to MRN network. 
This allows for high-volume data collection with minimal user access to PHI (protected health information). An added benefit of using COINS is the ability to collect, store and share imaging data and assessment data with no interaction with outside tools or programs. All study data collected (imaging and assessment) is stored and exported with a participant's unique subject identifier, so there is no need to keep extra spreadsheets or databases to link and keep track of the data. Data is easily exported from COINS via the Query Builder and study portal tools, which allow fine-grained selection of data to be exported into comma-separated value (CSV) format for easy import into statistical programs. There is a great need for data collection tools that limit human intervention and error while at the same time providing users with an intuitive design. COINS aims to be a leader in database solutions for research studies collecting data from several different modalities.
Wide-area littoral discreet observation: success at the tactical edge
NASA Astrophysics Data System (ADS)
Toth, Susan; Hughes, William; Ladas, Andrew
2012-06-01
In June 2011, the United States Army Research Laboratory (ARL) participated in Empire Challenge 2011 (EC-11). EC-11 was United States Joint Forces Command's (USJFCOM) annual live, joint and coalition intelligence, surveillance and reconnaissance (ISR) interoperability demonstration under the sponsorship of the Under Secretary of Defense for Intelligence (USD/I). EC-11 consisted of a series of ISR interoperability events, using a combination of modeling & simulation, laboratory and live-fly events. Wide-area Littoral Discreet Observation (WALDO) was ARL's maritime/littoral capability. WALDO met a USD(I) directive that EC-11 have a maritime component and WALDO was the primary player in the maritime scenario conducted at Camp Lejeune, North Carolina. The WALDO effort demonstrated the utility of a networked layered sensor array deployed in a maritime littoral environment, focusing on maritime surveillance targeting counter-drug, counter-piracy and suspect activity in a littoral or riverine environment. In addition to an embedded analytical capability, the sensor array and control infrastructure consisted of the Oriole acoustic sensor, iScout unattended ground sensor (UGS), OmniSense UGS, the Compact Radar and the Universal Distributed Management System (UDMS), which included the Proxy Skyraider, an optionally manned aircraft mounting both wide and narrow FOV EO/IR imaging sensors. The capability seeded a littoral area with riverine and unattended sensors in order to demonstrate the utility of a Wide Area Sensor (WAS) capability in a littoral environment focused on maritime surveillance activities. The sensors provided a cue for WAS placement/orbit. A narrow field of view sensor would be used to focus on more discreet activities within the WAS footprint. Additionally, the capability experimented with novel WAS orbits to determine if there are more optimal orbits for WAS collection in a littoral environment. 
The demonstration objectives for WALDO at EC-11 were:
* Demonstrate a networked, layered, multi-modal sensor array deployed in a maritime littoral environment, focusing on maritime surveillance targeting counter-drug, counter-piracy and suspect activity
* Assess the utility of a Wide Area Surveillance (WAS) sensor in a littoral environment focused on maritime surveillance activities
* Demonstrate the effectiveness of using UGS sensors to cue WAS sensor tasking
* Employ a narrow field of view full motion video (FMV) sensor package that is collocated with the WAS to conduct more discreet observation of potential items of interest when cued by near-real-time data from UGS or observers
* Couple the ARL Oriole sensor with other modality UGS networks in a ground layer ISR capability, and incorporate data collected from aerial sensors with a GEOINT base layer to form a fused product
* Swarm multiple aerial or naval platforms to prosecute single or multiple targets
* Track fast moving surface vessels in littoral areas
* Disseminate time sensitive, high value data to the users at the tactical edge
In short, we sought to answer the following question: how do you layer, control and display disparate sensors and sensor modalities in such a way as to facilitate appropriate sensor cross-cue, data integration, and analyst control to effectively monitor activity in a littoral (or novel) environment?
First Steps Toward K-12 Teacher Professional Development Using Internet-based Telescopes
NASA Astrophysics Data System (ADS)
Berryhill, K. J.; Gershun, D.; Slater, T. F.; Armstrong, J. D.
2012-12-01
How can science teachers become more familiar with emerging technology, excite their students and give students a taste of astronomy research? Astronomy teachers do not always have research experience, so it is difficult for them to convey to students how researchers use telescopes. The nature of astronomical observation (e.g., remote sites, expensive equipment, and odd hours) has been a barrier to providing teachers with insight into the process. Robotic telescopes (operated automatically with queued observing schedules) and remotely controlled telescopes (controlled by the user via the Internet) allow scientists to conduct observing sessions on research-grade telescopes half a world away. The same technology can now be harnessed by STEM educators to engage students and reinforce what is being taught in the classroom, as seen in some early research in elementary schools (McKinnon and Mainwaring 2000 and McKinnon and Geissinger 2002), and middle/high schools (Sadler et al. 2001, 2007 and Gehret et al. 2005). However, teachers need to be trained to use these resources. Responding to this need, graduate students and faculty at the University of Wyoming and CAPER Center for Astronomy & Physics Education Research are developing teacher professional development programs using Internet-based telescopes. We conducted an online course in the science education graduate program at the University of Wyoming. This course was designed to sample different types of Internet-based telescopes to evaluate them as resources for teacher professional development. The 10 participants were surveyed at the end of the course to assess their experiences with each activity. In addition, pre-test/post-test data were collected focusing specifically on one of the telescopes (Gershun, Berryhill and Slater 2012). 
Throughout the course, the participants learned to use a variety of robotic and remote telescopes including SLOOH Space Camera (www.slooh.com), Sky Titan Observatory (www.skytitan.org), Faulkes Telescope North (FTN—part of Las Cumbres Observatory Global Telescope Network—www.lcogt.net), and the MicroObservatory Robotic Telescope Network (http://mo-www.cfa.harvard.edu/MicroObservatory). As is common in astronomical observation, the class experienced setbacks to observing plans from a variety of sources, including clouds, dust storms, wind, instrument malfunctions, and light pollution from a nearby rodeo. Participants requested observations on robotic telescopes and directly controlled remote telescopes (FTN and Sky Titan). Survey data suggest that the ability to control telescopes in real time is of significant educational value, even though 6 of 10 participants cited frustrations due to equipment malfunctions and weather. Future courses will need backup plans or dates to account for the possibility of lost observing time. Participants used a variety of software tools to analyze data. Survey data showed the LCOGT Agent Exoplanet citizen science exercise to be an important learning event in the progression toward using SalsaJ to create exoplanet light curves from FTN data. Much of the data from FTN and Sky Titan used by participants was not collected during their own observing runs due to the issues noted above; the telescope operators provided previous data for analysis. None of the evidence we collected indicates that this lack of direct linkage is a problem.
IDCDACS: IDC's Distributed Application Control System
NASA Astrophysics Data System (ADS)
Ertl, Martin; Boresch, Alexander; Kianička, Ján; Sudakov, Alexander; Tomuta, Elena
2015-04-01
The Preparatory Commission for the CTBTO is an international organization based in Vienna, Austria. Its mission is to establish a global verification regime to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans all nuclear explosions. For this purpose, time series data from a global network of seismic, hydro-acoustic and infrasound (SHI) sensors is transmitted to the International Data Centre (IDC) in Vienna in near-real-time, where it is processed to locate events that may be nuclear explosions. We have designed a new distributed application control system that glues together the various components of the automatic waveform data processing system at the IDC (IDCDACS). Our highly scalable solution preserves the existing architecture of the IDC processing system that proved successful over many years of operational use, but replaces proprietary components with open-source solutions and custom-developed software. Existing code was refactored and extended to obtain a reusable software framework that is flexibly adaptable to different types of processing workflows. Automatic data processing is organized in series of self-contained processing steps, each series being referred to as a processing pipeline. Pipelines process data by time intervals, i.e. the time-series data received from monitoring stations is organized in segments based on the time when the data was recorded. So-called data monitor applications queue the data for processing in each pipeline based on specific conditions, e.g. data availability, elapsed time or completion states of preceding processing pipelines. IDCDACS consists of a configurable number of distributed monitoring and controlling processes, a message broker and a relational database. All processes communicate through message queues hosted on the message broker. Persistent state information is stored in the database. 
A configurable processing controller instantiates and monitors all data processing applications. Due to decoupling by message queues, the system is highly versatile and failure tolerant. The implementation utilizes the RabbitMQ open-source messaging platform that is based upon the Advanced Message Queuing Protocol (AMQP), an on-the-wire protocol (like HTTP) and open industry standard. IDCDACS uses high-availability capabilities provided by RabbitMQ and is equipped with failure recovery features to survive network and server outages. It is implemented in C and Python and is operated in a Linux environment at the IDC. Although IDCDACS was specifically designed for the existing IDC processing system, its architecture is generic and reusable for different automatic processing workflows, e.g. similar to those described in (Olivieri and Clinton 2012, Kværna et al. 2012). Major advantages are its independence of the specific data processing applications used and the possibility to reconfigure IDCDACS for different types of processing, data and trigger logic. A possible future development would be to use the IDCDACS framework for different scientific domains, e.g. for processing of Earth observation satellite data, extending the one-dimensional time-series intervals to spatio-temporal data cubes. REFERENCES Olivieri M., J. Clinton (2012) An almost fair comparison between Earthworm and SeisComp3, Seismological Research Letters, 83(4), 720-727. Kværna, T., S. J. Gibbons, D. B. Harris, D. A. Dodge (2012) Adapting pipeline architectures to track developing aftershock sequences and recurrent explosions, Proceedings of the 2012 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies, 776-785.
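The queue-decoupled control pattern described above can be sketched with Python's standard library standing in for a RabbitMQ broker; the names (`data_monitor`, `pipeline_worker`) and the data-availability trigger threshold are illustrative assumptions, not taken from the IDC code:

```python
# Minimal sketch of queue-decoupled pipeline control: a data monitor
# enqueues time intervals once a trigger condition (here, data
# availability) holds, and a pipeline worker consumes them. The
# queue.Queue stands in for a broker-hosted message queue.
import queue
import threading

pipeline_queue = queue.Queue()

def data_monitor(intervals, availability):
    """Queue a time interval for processing once enough data is available."""
    for interval in intervals:
        if availability.get(interval, 0.0) >= 0.9:  # illustrative trigger
            pipeline_queue.put(interval)
    pipeline_queue.put(None)  # sentinel: no more work

def pipeline_worker(results):
    """Consume intervals; the monitor never blocks on the worker."""
    while True:
        interval = pipeline_queue.get()
        if interval is None:
            break
        results.append(f"processed {interval}")

results = []
availability = {"00:00-00:10": 1.0, "00:10-00:20": 0.5, "00:20-00:30": 0.95}
monitor = threading.Thread(target=data_monitor,
                           args=(list(availability), availability))
worker = threading.Thread(target=pipeline_worker, args=(results,))
monitor.start(); worker.start()
monitor.join(); worker.join()
print(results)  # only intervals that met the availability condition
```

Because producer and consumer touch only the queue, either side can be restarted or scaled out without the other noticing, which is the failure-tolerance property the abstract attributes to the broker-based design.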
NASA Astrophysics Data System (ADS)
Diaz-Elsayed, Nancy
Between 2008 and 2035, global energy demand is expected to grow by 53%. While most industry-level analyses of manufacturing in the United States (U.S.) have traditionally focused on high energy consumers such as the petroleum, chemical, paper, primary metal, and food sectors, the remaining sectors account for the majority of establishments in the U.S. Specifically, of the establishments participating in the Energy Information Administration's Manufacturing Energy Consumption Survey in 2006, the "non-energy intensive" sectors still consumed 4×10^9 GJ of energy, i.e., one-quarter of the energy consumed by the manufacturing sectors, which is enough to power 98 million homes for a year. The increasing use of renewable energy sources and the introduction of energy-efficient technologies in manufacturing operations support the advancement towards a cleaner future, but having a good understanding of how the systems and processes function can reduce the environmental burden even further. To facilitate this, methods are developed to model the energy of manufacturing across three hierarchical levels: production equipment, factory operations, and industry; these methods are used to accurately assess the current state and provide effective recommendations to further reduce energy consumption. First, the energy consumption of production equipment is characterized to provide machine operators and product designers with viable methods to estimate the environmental impact of the manufacturing phase of a product. The energy model of production equipment is tested and found to have an average accuracy of 97% for a product requiring machining with a variable material removal rate profile. However, changing the use of production equipment alone will not result in an optimal solution since machines are part of a larger system. 
Which machines to use, how to schedule production runs while accounting for idle time, the design of the factory layout to facilitate production, and even the machining parameters --- these decisions affect how much energy is utilized during production. Therefore, at the facility level a methodology is presented for implementing priority queuing while accounting for a high product mix in a discrete event simulation environment. A baseline case is presented and alternative factory designs are suggested, which lead to energy savings of approximately 9%. At the industry level, the majority of energy consumption for manufacturing facilities is utilized for machine drive, process heating, and HVAC. Numerous studies have characterized the energy of manufacturing processes and HVAC equipment, but energy data is often limited for a facility in its entirety since manufacturing companies often lack the appropriate sensors to track it and are hesitant to release this information for confidentiality purposes. Without detailed information about the use of energy in manufacturing sites, the scope of factory studies cannot be adequately defined. Therefore, the breakdown of energy consumption of sectors with discrete production is presented, as well as a case study assessing the electrical energy consumption, greenhouse gas emissions, their associated costs, and labor costs for selected sites in the United States, Japan, Germany, China, and India. By presenting energy models and assessments of production equipment, factory operations, and industry, this dissertation provides a comprehensive assessment of energy trends in manufacturing and recommends methods that can be used beyond these case studies and industries to reduce consumption and contribute to an energy-efficient future.
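As a rough illustration of the facility-level idea of priority queuing in a discrete event simulation, the sketch below serves queued jobs by priority on a single machine and tracks busy versus idle energy; the job names, priorities, and power figures are invented for illustration and are not taken from the dissertation:

```python
# Priority-queuing sketch for a one-machine discrete event simulation.
# Jobs are (priority, arrival_time_h, name, duration_h); a lower
# priority value is served first among the jobs already queued.
import heapq

jobs = [(2, 0.0, "bracket", 1.0),
        (1, 0.0, "housing", 2.0),
        (3, 0.5, "shaft",   0.5)]

def simulate(jobs, busy_kw=10.0, idle_kw=2.0):
    """Serve queued jobs by priority; return (schedule, energy_kwh)."""
    ready = []                                   # heap of waiting jobs
    pending = sorted(jobs, key=lambda j: j[1])   # by arrival time
    clock, energy, schedule, i = 0.0, 0.0, [], 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= clock:
            heapq.heappush(ready, pending[i]); i += 1
        if not ready:                            # idle until next arrival
            next_t = pending[i][1]
            energy += idle_kw * (next_t - clock)
            clock = next_t
            continue
        prio, _, name, dur = heapq.heappop(ready)
        schedule.append(name)
        energy += busy_kw * dur
        clock += dur
    return schedule, energy

schedule, energy = simulate(jobs)
print(schedule, energy)  # highest-priority job runs first among those queued
```

With these inputs, "housing" (priority 1) jumps ahead of the earlier-listed "bracket", which is exactly the reordering effect a priority-queuing policy introduces into the baseline schedule.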
Industrializing Offshore Wind Power with Serial Assembly and Lower-cost Deployment - Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kempton, Willett
A team of engineers and contractors has developed a method to move offshore wind installation toward lower cost, faster deployment, and lower environmental impact. A combination of methods, some incremental and some breaks from past practice, interact to yield multiple improvements. Three designs were evaluated based on detailed engineering: 1) a 5 MW turbine on a jacket with pin piles (base case), 2) a 10 MW turbine on a conventional jacket with pin piles, assembled at sea, and 3) a 10 MW turbine on a tripod jacket with suction buckets (caissons) and with complete turbine assembly on-shore. The larger turbine, assembly ashore, and the use of suction buckets together substantially reduce the capital cost of offshore wind projects. Notable capital cost reductions are: changing from a 5 MW to a 10 MW turbine, a 31% capital cost reduction, and assembly on land followed by single-piece installation at sea, an additional 9% capital cost reduction. A projected Design 4) estimates the further cost reduction achievable when the equipment and processes of Design 3) are optimized, rather than adapted to existing equipment and processes. The cost of energy for each of the four designs is also calculated, yielding approximately the same percentage reductions. The methods of Design 3) analyzed here include accepted structures such as suction buckets used in new ways, innovations conceived but previously without engineering and economic validation, combined with new methods not previously proposed. Analyses of Designs 2) and 3) are based on extensive engineering calculations and detailed cost estimates. All design methods can be done with existing equipment, including lift equipment, ports and ships (except that Design 4 assumes a more optimized ship). The design team consists of experienced offshore structure designers, heavy lift engineers, wind turbine designers, vessel operators, and marine construction contractors. 
Comparing the methods based on criteria of cost and deployment speed, the study selected the third design. That design is, in brief: a conventional turbine and tubular tower is mounted on a tripod jacket, in turn atop three suction buckets. Blades are mounted on the tower, not on the hub. The entire structure is built in port, from the bottom up, then assembled structures are queued in the port for deployment. During weather windows, the fully-assembled structures are lifted off the quay, lashed to the vessel, and transported to the deployment site. The vessel analyzed is a shear leg crane vessel with dynamic positioning like the existing Gulliver, or it could be a US-built crane barge. On site, the entire structure is lowered to the bottom by the crane vessel, then pumping of the suction buckets is managed by smaller service vessels. Blades are lifted into place by small winches operated by workers in the nacelle without lift vessel support. Advantages of the selected design include: cost and time at sea of the expensive lift vessel are significantly reduced; no jack up vessel is required; the weather window required for each installation is shorter; turbine structure construction is continuous with a queue feeding the weather-dependent installation process; pre-installation geotechnical work is faster and less expensive; there are no sound impacts on marine mammals, thus minimal spotting and no work stoppage for mammal passage; the entire structure can be removed for decommissioning or major repairs; the method has been validated for current turbines up to 10 MW, and a calculation using simple scaling shows it usable up to 20 MW turbines.
Improving and streamlining the workflow in the graphic arts and printing industry
NASA Astrophysics Data System (ADS)
Tuijn, Chris
2003-01-01
In order to survive in the economy of today, an ever-increasing productivity is required from all the partners participating in a specific business process. This is not different for the printing industry. One of the ways to remain profitable is, on one hand, to reduce costs by automation and aiming for large-scale projects and, on the other hand, to specialize and become an expert in the area in which one is active. One of the ways to realize these goals is by streamlining the communication of the different partners and focus on the core business. If we look at the graphic arts and printing industry, we can identify different important players that eventually help in the realization of printed material. For the printing company (as is the case for any other company), the most important player is the customer. This role can be adopted by many different players including publishers, companies, non-commercial institutions, private persons etc. Sometimes, the customer will be the content provider as well but this is not always the case. Often, the content is provided by other organizations such as design and prepress agencies, advertising companies etc. In most printing organizations, the customer has one contact person often referred to as the CSR (Customers Service Representative). Other people involved at the printing organization include the sales representatives, prepress operators, printing operators, postpress operators, planners, the logistics department, the financial department etc. In the first part of this article, we propose a solution that will improve the communication between all the different actors in the graphic arts and printing industry considerably and will optimize and streamline the overall workflow as well. 
This solution consists of an environment in which the customer can communicate with the CSR to ask for a quote based on a specific product intent; the CSR will then (after the approval from the customer's side) organize the work and brief his technical managers to realize the product. Furthermore, the system will allow managers to brief the actors and follow up on the progress. At all times, the CSRs, as well as the customers, will be able to look at the overall status of a specific product. If required, the customers can approve the content over the web; the system will also support local and remote proofing. In the second part of this article, we will focus on the technical environment that can be used to create such a system. To this end, we propose the use of a multi-tier server architecture based on Sun's J2EE platform. Since our system performs a communicating role by nature, it will have to interface in a smart way with a lot of external systems such as prepress systems, MIS systems, mail servers etc. In order to allow a robust communication between the server and its subsystems that avoids a failure of the overall system if one of the components goes down, we have chosen a non-blocking, asynchronous communication method based on queuing systems. In order to support an easy integration with other systems in the graphic industry, we will also describe how our communication server supports the JDF standard, a new standard in the graphic industry established by the CIP4 committee.
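The non-blocking, queue-based coupling described above can be sketched with Python's `asyncio` standing in for the J2EE messaging stack; the message names and the late-starting subsystem are invented for illustration. The point of the pattern is that the server enqueues and moves on, so a subsystem that is temporarily down delays delivery but never blocks or crashes the server:

```python
# Sketch of non-blocking server-to-subsystem communication via a queue.
# The subsystem comes up late; queued messages survive and are delivered
# once it is available, so the overall system does not fail with it.
import asyncio

async def server(out_queue):
    for job_msg in ["quote-0042", "proof-0042", "print-0042"]:
        await out_queue.put(job_msg)   # enqueue and continue: non-blocking
    await out_queue.put(None)          # sentinel: shutdown

async def subsystem(in_queue, delivered, up_after=0.05):
    await asyncio.sleep(up_after)      # subsystem starts late; nothing is lost
    while True:
        msg = await in_queue.get()
        if msg is None:
            break
        delivered.append(msg)

async def main():
    q = asyncio.Queue()
    delivered = []
    await asyncio.gather(server(q), subsystem(q, delivered))
    return delivered

delivered = asyncio.run(main())
print(delivered)
```

In a production setting the in-process queue would be replaced by a persistent message-queuing system, but the decoupling logic is the same.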
Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.; Schulz, M.
2010-04-01
We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, especially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently.
Program summary
Program title: MCEG
Catalogue identifier: AEFV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2695
No. of bytes in distributed program, including test data, etc.: 18 501
Distribution format: tar.gz
Programming language: FORTRAN 77 with parallelization directives using scripting
Computer: Single machines using Linux and Linux servers/clusters (with cores of any clock speed, cache memory and word size)
Operating system: Linux (any version and flavor) with FORTRAN 77 compilers
Has the code been vectorised or parallelized?: Yes
RAM: 64-128 kBytes (the codes are very CPU intensive)
Classification: 2.6
Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest.
Solution method: The codes employ a Monte Carlo event generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and of incorporating additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations).
Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the codes have no communication between processes, it is possible to achieve an efficiency of 100% (though this figure will be penalized by the queuing waiting time).
Running time: Times vary according to the process to be simulated (single or double ionization), the number of processors and the type of theoretical model. The typical running time is between several hours and a few weeks.
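A toy event generator in Python can illustrate the workflow the summary describes: sample fragment momenta from an assumed distribution, smear them with a detector resolution, and then re-bin the resulting event file into an arbitrary histogram after the fact. The exponential momentum spectrum, isotropic angles, and resolution value below are purely illustrative assumptions, not the distorted-wave physics of MCEG:

```python
# Toy Monte Carlo event generator: emits an "event file" of electron
# momentum triples, smeared by a Gaussian detector resolution; any
# observable can then be histogrammed from the same events.
import math
import random

random.seed(1)

def generate_events(n, resolution=0.0):
    """Return n events; each is a momentum triple (px, py, pz)."""
    events = []
    for _ in range(n):
        p = random.expovariate(1.0)               # assumed |p| spectrum (a.u.)
        theta = math.acos(random.uniform(-1, 1))  # isotropic emission
        phi = random.uniform(0, 2 * math.pi)
        px = p * math.sin(theta) * math.cos(phi) + random.gauss(0, resolution)
        py = p * math.sin(theta) * math.sin(phi) + random.gauss(0, resolution)
        pz = p * math.cos(theta) + random.gauss(0, resolution)
        events.append((px, py, pz))
    return events

def longitudinal_histogram(events, edges):
    """Re-bin the event file into a longitudinal-momentum histogram."""
    counts = [0] * (len(edges) - 1)
    for _, _, pz in events:
        for i in range(len(counts)):
            if edges[i] <= pz < edges[i + 1]:
                counts[i] += 1
    return counts

events = generate_events(10000, resolution=0.05)
counts = longitudinal_histogram(events, [-4, -2, 0, 2, 4])
print(counts)
```

The key property mirrored here is that the histogram (or any cross section) is derived from the stored events rather than computed directly, so experimental conditions such as resolution enter once, at generation time.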
NASA Astrophysics Data System (ADS)
de Laat, Cees; Develder, Chris; Jukan, Admela; Mambretti, Joe
This topic is devoted to communication issues in scalable compute and storage systems, such as parallel computers, networks of workstations, and clusters. All aspects of communication in modern systems were solicited, including advances in the design, implementation, and evaluation of interconnection networks, network interfaces, system and storage area networks, on-chip interconnects, communication protocols, routing and communication algorithms, and communication aspects of parallel and distributed algorithms. In total, 15 papers were submitted to this topic, of which we selected the 7 strongest. We grouped the papers into two sessions of 3 papers each, and one paper was selected for the best paper session. We noted a number of papers dealing with changing topologies, stability and forwarding convergence in source routing based cluster interconnect network architectures. We grouped these for the first session. The authors of the paper titled “Implementing a Change Assimilation Mechanism for Source Routing Interconnects” propose a mechanism that can obtain the new topology, and compute and distribute a new set of fabric paths to the source routed network end points to minimize the impact on the forwarding service. The article entitled “Dependability Analysis of a Fault-tolerant Network Reconfiguration Strategy” reports on a case study analyzing the effects of network size, mean time to node failure, mean time to node repair, mean time to network repair and coverage of the failure when using a 2D mesh network with a fault-tolerant mechanism (similar to the one used in the BlueGene/L system) that is able to remove rows and/or columns in the presence of failures. The last paper in this session, “RecTOR: A New and Efficient Method for Dynamic Network Reconfiguration”, presents a new dynamic reconfiguration method that ensures deadlock-freedom during the reconfiguration without causing performance degradation such as increased latency or decreased throughput. 
The second session groups 3 papers presenting methods, protocols and architectures that enhance capacities in the Networks. The paper titled: “NIC-assisted Cache-Efficient Receive Stack for Message Passing over Ethernet” presents the addition of multiqueue support in the Open-MX receive stack so that all incoming packets for the same process are treated on the same core. It then introduces the idea of binding the target end process near its dedicated receive queue. In general this multiqueue receive stack performs better than the original single queue stack, especially on large communication patterns where multiple processes are involved and manual binding is difficult. The authors of: “A Multipath Fault-Tolerant Routing Method for High-Speed Interconnection Networks” focus on the problem of fault tolerance for high-speed interconnection networks by designing a fault tolerant routing method. The goal was to solve a certain number of link and node failures, considering its impact, and occurrence probability. Their experiments show that their method allows applications to successfully finalize their execution in the presence of several faults, with an average performance value of 97% with respect to the fault-free scenarios. The paper: “Hardware implementation study of the Self-Clocked Fair Queuing Credit Aware (SCFQ-CA) and Deficit Round Robin Credit Aware (DRR-CA) scheduling algorithms” proposes specific implementations of the two schedulers taking into account the characteristics of current high-performance networks. A comparison is presented on the complexity of these two algorithms in terms of silicon area and computation delay. Finally we selected one paper for the special paper session: “A Case Study of Communication Optimizations on 3D Mesh Interconnects”. In this paper the authors present topology aware mapping as a technique to optimize communication on 3-dimensional mesh interconnects and hence improve performance. 
Results are presented for OpenAtom on up to 16,384 processors of Blue Gene/L, 8,192 processors of Blue Gene/P and 2,048 processors of Cray XT3.
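The plain Deficit Round Robin idea underlying the DRR-CA scheduler compared in that session can be sketched as follows; this omits the credit-aware extension, and the quantum and packet sizes are arbitrary illustrative values:

```python
# Basic Deficit Round Robin: each non-empty flow gains one quantum of
# credit per round and may dequeue packets only while their sizes fit
# within its accumulated deficit counter.
from collections import deque

def drr(flows, quantum=500):
    """flows: list of deques of packet sizes. Returns dequeue order (flow, size)."""
    deficit = [0] * len(flows)
    order = []
    while any(flows):
        for i, flow in enumerate(flows):
            if not flow:
                deficit[i] = 0          # empty flows keep no credit
                continue
            deficit[i] += quantum
            while flow and flow[0] <= deficit[i]:
                size = flow.popleft()
                deficit[i] -= size
                order.append((i, size))
    return order

flows = [deque([200, 200, 200]), deque([700]), deque([100])]
print(drr(flows))
```

Note how flow 1's oversized 700-byte packet waits one extra round while its deficit accumulates; over time each flow's share converges to its quantum-weighted fraction of the link, which is the fairness property the hardware-implementation study quantifies in silicon area and delay.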
Europe's latest space telescope is off to a good start
NASA Astrophysics Data System (ADS)
1999-12-01
The world's most powerful observatory for X-ray astronomy, the European Space Agency's XMM satellite, set off into space from Kourou, French Guiana, at 15:32 Paris time on 10 December. The mighty Ariane 5 launcher, making its very first commercial launch, hurled the 3.9-tonne spacecraft into a far-ranging orbit. Within one hour of lift-off the European Space Operations Centre at Darmstadt, Germany, confirmed XMM was under control with electrical power available from the solar arrays. "XMM is the biggest and most innovative scientific spacecraft developed by ESA so far," said Roger Bonnet, ESA's Director of Science. "The world's space agencies now want the new technology that ESA and Europe's industries have put into XMM's amazingly sensitive X-ray telescopes. And the world's astronomers are queuing up to use XMM to explore the hottest places in the universe. We must ask them to be patient while we get XMM fully commissioned." XMM's initial orbit carries it far into space, to 114,000 kilometres from the Earth at its most distant point. On its return the satellite's closest approach, or perigee, will be at 850 kilometres. The next phase of the operation, expected to take about a week, will raise that perigee to 7000 kilometres by repeated firing of XMM's own thrusters. The spacecraft will then be on its intended path, spending 40 hours out of every 48-hour orbit clear of the radiation belts which spoil the view of the X-ray universe. Technical commissioning and verification of the performance of the telescopes and scientific instruments will then follow. XMM should be fully operational for astronomy in the spring of 2000. All of ESA's science missions present fresh technological challenges to Europe's aerospace industries. In building XMM, the prime contractor Dornier Satellitensysteme in Friedrichshafen in Germany (part of DaimlerChrysler Aerospace) has led an industrial consortium involving 46 companies from 14 European countries and one in the United States. 
XMM stands for X-ray Multi-Mirror Mission. Its main telescopes will gather X-rays from the cosmos with 120 square metres of gold-coated surfaces, in 174 mirrors fashioned, smoothed and nested together with high precision by contractors in Germany and Italy. With XMM, Europe has taken the lead in X-ray missions and X-ray detectors: the most sensitive and largest ever made. The four complex scientific instruments on XMM have been developed and led by European scientists with participation from institutes worldwide. Compared with NASA's Chandra X-ray telescope launched earlier this year, XMM is at least 5 times more sensitive. The gain in sensitivity is 15-fold, at high X-ray energies. But Chandra has a sharper view, so the two missions are complementary and there is close transatlantic collaboration among the scientists involved. Prime scientific objectives for XMM are to find out exactly what goes on in the vicinity of black holes, and to help to clear up the mystery of the stupendous explosions called gamma-ray bursts. Other hot topics for investigation include cannibalism among the stars, the release of newly made chemical elements from stellar explosions, and the origin of the cosmic rays that rain on the Earth. XMM is one of a carefully-planned series of scientific satellites built in Europe by which ESA has established a pioneering role in space astronomy. Recently completed missions include the very successful star-mapping satellite Hipparcos, and the Infrared Space Observatory which revolutionized astronomers' knowledge of the cool parts of the universe. Coming along after XMM are Integral for gamma-ray astronomy, FIRST for the far-infrared, and Planck for examining the entire cosmic microwave background far more accurately than ever before.
Clear New View of a Classic Spiral
NASA Astrophysics Data System (ADS)
2010-05-01
ESO is releasing a beautiful image of the nearby galaxy Messier 83 taken by the HAWK-I instrument on ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile. The picture shows the galaxy in infrared light and demonstrates the impressive power of the camera to create one of the sharpest and most detailed pictures of Messier 83 ever taken from the ground. The galaxy Messier 83 (eso0825) is located about 15 million light-years away in the constellation of Hydra (the Sea Serpent). It spans over 40 000 light-years, only 40 percent the size of the Milky Way, but in many ways is quite similar to our home galaxy, both in its spiral shape and the presence of a bar of stars across its centre. Messier 83 is famous among astronomers for its many supernovae: vast explosions that end the lives of some stars. Over the last century, six supernovae have been observed in Messier 83 - a record number that is matched by only one other galaxy. Even without supernovae, Messier 83 is one of the brightest nearby galaxies, visible using just binoculars. Messier 83 has been observed in the infrared part of the spectrum using HAWK-I [1], a powerful camera on ESO's Very Large Telescope (VLT). When viewed in infrared light most of the obscuring dust that hides much of Messier 83 becomes transparent. The brightly lit gas around hot young stars in the spiral arms is also less prominent in infrared pictures. As a result much more of the structure of the galaxy and the vast hordes of its constituent stars can be seen. This clear view is important for astronomers looking for clusters of young stars, especially those hidden in dusty regions of the galaxy. Studying such star clusters was one of the main scientific goals of these observations [2]. When compared to earlier images, the acute vision of HAWK-I reveals far more stars within the galaxy. 
The combination of the huge mirror of the VLT, the large field of view and great sensitivity of the camera, and the superb observing conditions at ESO's Paranal Observatory makes HAWK-I one of the most powerful near-infrared imagers in the world. Astronomers are eagerly queuing up for the chance to use the camera, which began operation in 2007 (eso0736), and to get some of the best ground-based infrared images ever of the night sky. Notes [1] HAWK-I stands for High-Acuity Wide-field K-band Imager. More technical details about the camera can be found in an earlier press release (eso0736). [2] The data used to prepare this image were acquired by a team led by Mark Gieles (University of Cambridge) and Yuri Beletsky (ESO). Mischa Schirmer (University of Bonn) performed the challenging data processing. More information ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. 
ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
ESA's Multi-mission Sentinel-1 Toolbox
NASA Astrophysics Data System (ADS)
Veci, Luis; Lu, Jun; Foumelis, Michael; Engdahl, Marcus
2017-04-01
The Sentinel-1 Toolbox is a new open-source software package for scientific learning, research and exploitation of the large archives of Sentinel and heritage missions. The Toolbox is based on the proven BEAM/NEST architecture, inheriting all current NEST functionality including multi-mission support for most civilian satellite SAR missions. The project is funded through ESA's Scientific Exploitation of Operational Missions (SEOM). The Sentinel-1 Toolbox will strive to serve the SEOM mandate by providing leading-edge software to science and application users in support of ESA's operational SAR mission, as well as by educating and growing a SAR user community. The Toolbox consists of a collection of processing tools, data product readers and writers, and a display and analysis application. A common architecture for all Sentinel Toolboxes, called the Sentinel Application Platform (SNAP), is being jointly developed by Brockmann Consult, Array Systems Computing and C-S. The SNAP architecture is ideal for Earth Observation processing and analysis due to the following technological innovations: extensibility, portability, a modular rich client platform, generic EO data abstraction, tiled memory management, and a graph processing framework. The project has developed new tools for working with Sentinel-1 data, in particular with the new interferometric TOPSAR mode. TOPSAR complex coregistration and a complete interferometric processing chain have been implemented for Sentinel-1 TOPSAR data. To accomplish this, a coregistration following the Spectral Diversity [4] method has been developed, as well as special azimuth handling in the coherence, interferogram and spectral filter operators. The Toolbox includes reading of L0, L1 and L2 products in SAFE format, calibration and de-noising, slice product assembling, TOPSAR deburst and sub-swath merging, terrain-flattening radiometric normalization, and visualization for L2 OCN products. 
The Toolbox also provides several new tools for exploitation of polarimetric data including speckle filters, decompositions, and classifiers. The Toolbox will also include tools for large data stacks, supervised and unsupervised classification, improved vector handling and change detection. Architectural improvements such as smart memory configuration, task queuing, and optimizations for complex data will provide better support and performance for very large products and stacks. In addition, a Cloud Exploitation Platform Extension (CEP) has been developed to add the capability to smoothly utilize a cloud computing platform where EO data repositories and high-performance processing capabilities are available. The extension to the SENTINEL Application Platform would facilitate entry into cloud processing services for supporting bulk processing on high-performance clusters. Since December 2016, the COMET-LiCS InSAR portal (http://comet.nerc.ac.uk/COMET-LiCS-portal/) has been live, delivering interferograms and coherence estimates over the entire Alpine-Himalayan belt. The portal already contains tens of thousands of products, which can be browsed in a user-friendly portal, and downloaded for free by the general public. For our processing, we use the facilities at the Climate and Environmental Monitoring from Space (CEMS). Here we have large storage and processing facilities at our disposal, and a complete duplicate of the Sentinel-1 archive is maintained. This greatly simplifies the infrastructure we had to develop for automated processing of large areas. Here we will give an overview of the current status of the processing system, as well as discuss future plans. We will cover the infrastructure we developed to automatically produce interferograms and its challenges, and the processing strategy for time series analysis. We will outline the objectives of the system in the near and distant future, and a roadmap for its continued development. 
Finally, we will highlight some of the scientific results and projects linked to the system.
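The graph-based, tiled processing model the Sentinel-1 Toolbox abstract credits to SNAP can be illustrated with a toy sketch. Everything below (the operator names, the calibration factor, a 3x1 mean filter standing in for a real speckle filter) is invented for illustration and is not the actual SNAP/snappy API; the point is only the pattern: operators form a graph, and the sink pulls data through it one tile at a time so that a very large product never has to be held in memory at once.

```python
import numpy as np

class Operator:
    """A node in the processing graph; pulls tiles from its sources on demand."""
    def __init__(self, *sources):
        self.sources = sources
    def compute_tile(self, y0, y1):
        raise NotImplementedError

class Read(Operator):
    """Source operator: serves tiles of an in-memory array (a stand-in for a product reader)."""
    def __init__(self, data):
        super().__init__()
        self.data = data
    def compute_tile(self, y0, y1):
        return self.data[y0:y1]

class Calibrate(Operator):
    """Apply a (hypothetical) constant calibration factor."""
    def __init__(self, source, factor=0.5):
        super().__init__(source)
        self.factor = factor
    def compute_tile(self, y0, y1):
        return self.sources[0].compute_tile(y0, y1) * self.factor

class SpeckleFilter(Operator):
    """3x1 mean filter along range, standing in for a real speckle filter."""
    def compute_tile(self, y0, y1):
        tile = self.sources[0].compute_tile(y0, y1)
        padded = np.pad(tile, ((0, 0), (1, 1)), mode="edge")
        return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def run_graph(sink, height, tile_rows):
    # Tiled execution: only tile_rows rows are materialised at a time,
    # mimicking tiled memory management for very large products.
    return np.vstack([sink.compute_tile(y, min(y + tile_rows, height))
                      for y in range(0, height, tile_rows)])

raw = np.ones((8, 6))
graph = SpeckleFilter(Calibrate(Read(raw)))
out = run_graph(graph, height=8, tile_rows=3)
```

SNAP itself expresses broadly the same idea through XML-defined operator graphs executed by its Graph Processing Framework, with tiles computed on demand.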
Delivery arrangements for health systems in low-income countries: an overview of systematic reviews
Ciapponi, Agustín; Lewin, Simon; Herrera, Cristian A; Opiyo, Newton; Pantoja, Tomas; Paulsen, Elizabeth; Rada, Gabriel; Wiysonge, Charles S; Bastías, Gabriel; Dudley, Lilian; Flottorp, Signe; Gagnon, Marie-Pierre; Garcia Marti, Sebastian; Glenton, Claire; Okwundu, Charles I; Peñaloza, Blanca; Suleman, Fatima; Oxman, Andrew D
2017-01-01
Background Delivery arrangements include changes in who receives care and when, who provides care, the working conditions of those who provide care, coordination of care amongst different providers, where care is provided, the use of information and communication technology to deliver care, and quality and safety systems. How services are delivered can have impacts on the effectiveness, efficiency and equity of health systems. This broad overview of the findings of systematic reviews can help policymakers and other stakeholders identify strategies for addressing problems and improve the delivery of services. Objectives To provide an overview of the available evidence from up-to-date systematic reviews about the effects of delivery arrangements for health systems in low-income countries. Secondary objectives include identifying needs and priorities for future evaluations and systematic reviews on delivery arrangements and informing refinements of the framework for delivery arrangements outlined in the review. Methods We searched Health Systems Evidence in November 2010 and PDQ-Evidence up to 17 December 2016 for systematic reviews. We did not apply any date, language or publication status limitations in the searches. We included well-conducted systematic reviews of studies that assessed the effects of delivery arrangements on patient outcomes (health and health behaviours), the quality or utilisation of healthcare services, resource use, healthcare provider outcomes (such as sick leave), or social outcomes (such as poverty or employment) and that were published after April 2005. We excluded reviews with limitations important enough to compromise the reliability of the findings. Two overview authors independently screened reviews, extracted data, and assessed the certainty of evidence using GRADE. 
We prepared SUPPORT Summaries for eligible reviews, including key messages, 'Summary of findings' tables (using GRADE to assess the certainty of the evidence), and assessments of the relevance of findings to low-income countries. Main results We identified 7272 systematic reviews and included 51 of them in this overview. We judged 6 of the 51 reviews to have important methodological limitations and the other 45 to have only minor limitations. We grouped delivery arrangements into eight categories. Some reviews provided more than one comparison and were in more than one category. Across these categories, the following interventions were effective; that is, they had desirable effects on at least one outcome with moderate- or high-certainty evidence and no moderate- or high-certainty evidence of undesirable effects. Who receives care and when: queuing strategies and antenatal care to groups of mothers. Who provides care: lay health workers for caring for people with hypertension, lay health workers to deliver care for mothers and children or infectious diseases, lay health workers to deliver community-based neonatal care packages, midlevel health professionals for abortion care, social support to pregnant women at risk, midwife-led care for childbearing women, non-specialist providers in mental health and neurology, and physician-nurse substitution. Coordination of care: hospital clinical pathways, case management for people living with HIV and AIDS, interactive communication between primary care doctors and specialists, hospital discharge planning, adding a service to an existing service and integrating delivery models, referral from primary to secondary care, physician-led versus nurse-led triage in emergency departments, and team midwifery. 
Where care is provided: high-volume institutions, home-based care (with or without multidisciplinary team) for people living with HIV and AIDS, home-based management of malaria, home care for children with acute physical conditions, community-based interventions for childhood diarrhoea and pneumonia, out-of-facility HIV and reproductive health services for youth, and decentralised HIV care. Information and communication technology: mobile phone messaging for patients with long-term illnesses, mobile phone messaging reminders for attendance at healthcare appointments, mobile phone messaging to promote adherence to antiretroviral therapy, women carrying their own case notes in pregnancy, interventions to improve childhood vaccination. Quality and safety systems: decision support with clinical information systems for people living with HIV/AIDS. Complex interventions (cutting across delivery categories and other health system arrangements): emergency obstetric referral interventions. Authors' conclusions A wide range of strategies have been evaluated for improving delivery arrangements in low-income countries, using sound systematic review methods in both Cochrane and non-Cochrane reviews. These reviews have assessed a range of outcomes. Most of the available evidence focuses on who provides care, where care is provided and coordination of care. For all the main categories of delivery arrangements, we identified gaps in primary research related to uncertainty about the applicability of the evidence to low-income countries, low- or very low-certainty evidence or a lack of studies. Effects of delivery arrangements for health systems in low-income countries What is the aim of this overview? The aim of this Cochrane Overview is to provide a broad summary of what is known about the effects of delivery arrangements for health systems in low-income countries. This overview is based on 51 systematic reviews. 
These systematic reviews searched for studies that evaluated different types of delivery arrangements. The reviews included a total of 850 studies. This overview is one of a series of four Cochrane Overviews that evaluate health system arrangements. What was studied in the overview? Delivery arrangements include changes in who receives care and when, who provides care, the working conditions of those who provide care, coordination of care amongst different health care providers, where care is provided, the use of information and communication technology to deliver care, and quality and safety systems. How services are delivered can have impacts on the effectiveness, efficiency and equity of health systems. This overview can help policymakers and other stakeholders to identify evidence-informed strategies to improve the delivery of services. What are the main results of the overview? When focusing only on evidence assessed as high to moderate certainty, the overview points to a number of delivery arrangements that had at least one desirable outcome and no evidence of any undesirable outcomes. 
These include the following:

Who receives care and when
- Queuing strategies
- Group antenatal care

Who provides care – role expansion or task shifting
- Lay or community health workers supporting the care of people with hypertension
- Community-based neonatal packages that include additional training of outreach workers
- Lay health workers to deliver care for mothers and children or for infectious diseases
- Mid-level, non-physician providers for abortion care
- Health workers providing social support during at-risk pregnancies
- Midwife-led care for childbearing women and their infants
- Non-specialist health workers or other professionals with health roles to help people with mental, neurological and substance-abuse disorders
- Nurses substituting for physicians in providing care

Coordination of care
- Structured multidisciplinary care plans (care pathways) used by health care providers in hospitals to detail essential steps in the care of people with a specific clinical problem
- Interactive communication between collaborating primary care physicians and specialist physicians in outpatient care
- Planning to facilitate patients’ discharge from hospital to home
- Adding a new health service to an existing service and integrating services in health care delivery
- Integrating vaccination with other healthcare services
- Using physicians rather than nurses to lead triage in emergency departments
- Groups or teams of midwives providing care for a group of women during pregnancy and childbirth and after childbirth

Where care is provided – site of service delivery
- Clinics or hospitals that manage a high volume of people living with HIV and AIDS rather than smaller volumes
- Intensive home-based care for people living with HIV and AIDS
- Home-based management of malaria in children
- Providing care closer to home for children with long-term health conditions
- Community-based interventions using lay health workers for childhood diarrhoea and pneumonia
- Youth HIV and reproductive health services provided outside of health facilities
- Decentralising care for initiation and maintenance of HIV and AIDS medicine treatment to peripheral health centres or lower levels of healthcare

Information and communication technology
- Mobile phone messaging for people with long-term illnesses
- Mobile phone messaging reminders for attendance at healthcare appointments
- Mobile phone messaging to promote adherence to antiretroviral therapy
- Women carrying their own case notes in pregnancy
- Information and communication interventions to improve childhood vaccination coverage

Quality and safety systems
- Establishing clinical information systems to organize patient data for people living with HIV and AIDS

Packages that include multiple interventions
- Interventions to improve referral for emergency care during pregnancy and childbirth

How up to date is this overview? The overview authors searched for systematic reviews that had been published up to 17 December 2016. PMID:28901005
A strategic planning approach for operational-environmental tradeoff assessments in terminal areas
NASA Astrophysics Data System (ADS)
Jimenez, Hernando
This thesis proposes the use of well-established statistical analysis techniques, leveraging recent developments in interactive data visualization capabilities, to quantitatively characterize the interactions, sensitivities, and tradeoffs prevalent in the complex behavior of airport operational and environmental performance. Within the strategic airport planning process, this approach is used in the assessment of airport performance under current/reference conditions, as well as in the evaluation of terminal area solutions under projected demand conditions. More specifically, customized designs of experiments are utilized to guide the intelligent selection and definition of modeling and simulation runs that will yield greater understanding, insight, and information about the inherent systemic complexity of a terminal area, with minimal computational expense. For the research documented in this thesis, a modeling and simulation environment was created featuring three primary components. First, a generator of schedules of operations, based primarily on previous work on aviation demand characterization, whereby growth factors and scheduling adjustment algorithms are applied on appropriate baseline schedules so as to generate notional operational sets representative of consistent future demand conditions. The second component pertains to the modeling and simulation of aircraft operations, defined by a schedule of operations, on the airport surface and within its terminal airspace. This component is a discrete event simulator for multiple queuing models that captures the operational architecture of the entire terminal area along with all the necessary operational logic pertaining to simulated Air Traffic Control (ATC) functions, rules, and standard practices. 
The third and final component comprises legacy aircraft performance, emissions and dispersion, and noise exposure modeling tools, which use the simulation history of aircraft movements to generate estimates of fuel burn, emissions, and noise. The implementation of the proposed approach for the assessment of terminal area solutions incorporates the use of discrete response surface equations, and eliminates the use of quadratic terms that have no practical significance in this context. Rather, attention is placed entirely on the main effects of different terminal area solutions, namely additional airport infrastructure, operational improvements, and advanced aircraft concepts, modeled as discrete independent variables for the regression model. Results reveal that an additional runway and a new international terminal, as well as reduced aircraft separation, have a major effect on all operational metrics of interest. In particular, the additional runway has a dominant effect for departure delay metrics and gate hold periods, with moderate interactions with respect to separation reduction. On the other hand, operational metrics for arrivals are co-dependent on additional infrastructure and separation reduction, featuring marginal improvements whenever these two solutions are implemented in isolation, but featuring a dramatic compounding effect when implemented in combination. The magnitude of these main effects for departures and of the interaction between these solutions for arrivals is confirmed through appropriate statistical significance testing. Finally, the inclusion of advanced aircraft concepts is shown to be most beneficial for airborne arrival operations and to a lesser extent for arrival ground movements. 
More specifically, advanced aircraft concepts were found to be primarily responsible for reductions in volatile organic compounds, unburned hydrocarbons, and particulate matter in this flight regime, but featured relevant interactions with separation reduction and additional airport infrastructure. To address the selection of scenarios for strategic airport planning, a technique for risk-based scenario construction, evaluation, and selection is proposed, incorporating n-dimensional dependence tree probability approximations into a morphological analysis approach. This approach to scenario construction and downselection is a distinct and novel contribution to the scenario planning field as it provides a mathematically and explicitly testable definition for an H parameter, contrasting with the qualitative alternatives in the current state of the art, which can be used in morphological analysis for scenario construction and downselection. By demonstrating that dependence tree probability product approximations are an adequate aggregation function, probability can be used for scenario construction and downselection without any mathematical or methodological restriction on the resolution of the probability scale or the number of morphological alternatives that have previously plagued probabilization and scenario downselection approaches. In addition, this approach requires expert input elicitation that is comparable to, or less than, that of current state-of-the-art practices. (Abstract shortened by UMI.)
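The kind of discrete response-surface fit described in the thesis abstract (0/1 terminal-area solutions as factors, main effects plus a two-way interaction, no quadratic terms) can be sketched in a few lines. The delay values below are synthetic, chosen only to mimic the qualitative pattern reported for arrivals: little benefit from either solution alone, a large compounding effect when both are implemented together.

```python
import numpy as np

# Factors: x1 = additional runway, x2 = separation reduction (0 = off, 1 = on)
X_raw = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
# Synthetic arrival-delay response (minutes): marginal change from either
# solution alone, a dramatic improvement when both are combined.
y = np.array([20.0, 19.0, 18.5, 8.0])

# Design matrix: intercept, main effects, and the x1*x2 interaction term.
X = np.column_stack([np.ones(4), X_raw[:, 0], X_raw[:, 1],
                     X_raw[:, 0] * X_raw[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = beta
# With 4 runs and 4 parameters the fit is exact; the interaction
# coefficient b12 carries the compounding benefit of combining solutions,
# dwarfing either main effect on its own.
```

In the thesis the same idea is applied to simulated rather than made-up responses, with significance testing on the estimated effects; this sketch only shows the regression mechanics.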
Cooperative Data Sharing: Simple Support for Clusters of SMP Nodes
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Balley, David H. (Technical Monitor)
1997-01-01
Libraries like PVM and MPI send typed messages to allow for heterogeneous cluster computing. Lower-level libraries, such as GAM, provide more efficient access to communication by removing the need to copy messages between the interface and user space in some cases. Still lower-level interfaces, such as UNET, get right down to the hardware level to provide maximum performance. However, these are all still interfaces for passing messages from one process to another, and have limited utility in a shared-memory environment, due primarily to the fact that message passing is just another term for copying. This drawback is made more pertinent by today's hybrid architectures (e.g. clusters of SMPs), where it is difficult to know beforehand whether two communicating processes will share memory. As a result, even portable language tools (like HPF compilers) must either map all interprocess communication into message passing, with the accompanying performance degradation in shared-memory environments, or they must check each communication at run-time and implement the shared-memory case separately for efficiency. Cooperative Data Sharing (CDS) is a single user-level API which abstracts all communication between processes into the sharing and access coordination of memory regions, in a model which might be described as "distributed shared messages" or "large-grain distributed shared memory". As a result, the user programs to a simple latency-tolerant abstract communication specification which can be mapped efficiently to either a shared-memory or message-passing based run-time system, depending upon the available architecture. Unlike some distributed shared memory interfaces, the user still has complete control over the assignment of data to processors, the forwarding of data to its next likely destination, and the queuing of data until it is needed, so even the relatively high latency present in clusters can be accommodated. 
CDS does not require special use of an MMU, which can add overhead to some DSM systems, and does not require an SPMD programming model. Unlike some message-passing interfaces, CDS allows the user to implement efficient demand-driven applications where processes must "fight" over data, and does not perform copying if processes share memory and do not attempt concurrent writes. CDS also supports heterogeneous computing, dynamic process creation, handlers, and a very simple thread-arbitration mechanism. Additional support for array subsections is currently being considered. The CDS1 API, which forms the kernel of CDS, is built primarily upon only 2 communication primitives, one process initiation primitive, and some data translation (and marshalling) routines, memory allocation routines, and priority control routines. The entire current collection of 28 routines provides enough functionality to implement most (or all) of MPI 1 and 2, which has a much larger interface consisting of hundreds of routines. Still, the API is small enough to consider integrating into standard OS interfaces for handling inter-process communication in a network-independent way. This approach would also help to solve many of the problems plaguing other higher-level standards such as MPI and PVM which must, in some cases, "play OS" to adequately address progress and process control issues. The CDS2 API, a higher level of interface roughly equivalent in functionality to MPI and to be built entirely upon CDS1, is still being designed. It is intended to add support for the equivalent of communicators, reduction and other collective operations, process topologies, additional support for process creation, and some automatic memory management. CDS2 will not exactly match MPI, because the copy-free semantics of communication from CDS1 will be supported. CDS2 application programs will also be free to make careful use of CDS1 directly. 
CDS1 has been implemented on networks of workstations running unmodified Unix-based operating systems, using UDP/IP and vendor-supplied high-performance locks. Although its inter-node performance is currently unimpressive due to rudimentary implementation techniques, it even now outperforms highly-optimized MPI implementations on intra-node communication due to its support for non-copy communication. The similarity of the CDS1 architecture to that of other projects such as UNET and TRAP suggests that the inter-node performance can be increased significantly to surpass MPI or PVM, and it may be possible to migrate some of its functionality to communication controllers.
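As a rough illustration of the "distributed shared messages" idea, the sketch below routes references to memory regions through a coordinator and copies only when the peer cannot share memory. The primitive names (put_region, get_region) and the coordinator object are invented for this sketch; they are not the actual CDS1 API, which the abstract describes only at the level of two communication primitives plus supporting routines.

```python
from collections import deque

class Coordinator:
    """Toy stand-in for a CDS-style runtime: regions are queued per
    destination until the consumer asks for them (latency tolerance)."""
    def __init__(self):
        self.queues = {}

    def put_region(self, dest, region, same_node=True):
        # Copy-free hand-off when producer and consumer share memory;
        # otherwise the region is copied, as a message-passing transport would.
        payload = region if same_node else bytearray(region)
        self.queues.setdefault(dest, deque()).append(payload)

    def get_region(self, dest):
        # Returns the next queued region, or None if nothing has arrived yet.
        q = self.queues.get(dest)
        return q.popleft() if q else None

coord = Coordinator()
buf = bytearray(b"field data")

coord.put_region("worker-1", buf, same_node=True)
shared = coord.get_region("worker-1")    # same object: no copy performed

coord.put_region("worker-2", buf, same_node=False)
copied = coord.get_region("worker-2")    # equal content, distinct object
```

The point of the sketch is the dispatch decision: the caller's code is identical in both cases, and the runtime, not the application, decides whether a copy is needed.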
NASA Astrophysics Data System (ADS)
Wang, Mulan Xiaofeng
My dissertation concentrates on several aspects of supply chain management and economic valuation of real options in the natural gas and liquefied natural gas (LNG) industry, including gas pipeline transportations, ocean LNG shipping logistics, and downstream storage. Chapter 1 briefly introduces the natural gas and LNG industries, and the topics studied in this thesis. Chapter 2 studies how to value U.S. natural gas pipeline network transport contracts as real options. It is common for natural gas shippers to value and manage contracts by simple adaptations of financial spread option formulas that do not fully account for the implications of the capacity limits and the network structure that distinguish these contracts. In contrast, we show that these operational features can be fully captured and integrated with financial considerations in a fairly easy and managerially significant manner by a model that combines linear programming and simulation. We derive pathwise estimators for the so-called deltas and structurally characterize them. We interpret them in a novel fashion as discounted expectations, under a specific weighing distribution, of the amounts of natural gas to be procured/marketed when optimally using pipeline capacity. Based on the actual prices of traded natural gas futures and basis swaps, we show that an enhanced version of the common approach employed in practice can significantly underestimate the true value of natural gas pipeline network capacity. Our model also exhibits promising financial (delta) hedging performance. Thus, this model emerges as an easy to use and useful tool that natural gas shippers can employ to support their valuation and delta hedging decisions concerning natural gas pipeline network transport capacity contracts. Moreover, the insights that follow from our data analysis have broader significance and implications in terms of the management of real options beyond our specific application. 
Motivated by current developments in the LNG industry, Chapter 3 studies the operations of LNG supply chains facing both supply and price risk. To model the supply uncertainty, we employ a closed-queuing-network (CQN) model to represent upstream LNG production and shipping, via special ocean-going tankers, to a downstream re-gasification facility in the U.S., which sells natural gas into the wholesale spot market. The CQN shipping model analytically generates the unloaded amount probability distribution. Price uncertainty is captured by the spot price, which exhibits both volatility and significant seasonality, i.e., higher prices in winter. We use a trinomial lattice to model the price uncertainty, and calibrate it to the extended forward curves. Taking the outputs from the CQN model and the spot price model as stochastic inputs, we formulate a real option inventory-release model to study the benefit of optimally managing a downstream LNG storage facility. This allows characterization of the structure of the optimal inventory management policy. An interesting finding is that when it is optimal to sell, it is not necessarily optimal to sell the entire available inventory. The model can be used by LNG players to value and manage the real option to store LNG at a re-gasification facility, and is easy to implement. For example, this model is particularly useful for valuing leasing contracts for portions of the facility capacity. Real data is used to assess the value of the real option to store LNG at the downstream re-gasification facility, and, contrary to what has been claimed by some practitioners, we find that it has significant value (several million dollars). Chapter 4 studies the importance of modeling the shipping variability when valuing and managing a downstream LNG storage facility. The shipping model presented in Chapter 3 uses a "rolling forward" method to generate the independent and identically distributed (i.i.d.) 
unloaded amount in each decision period. We study the merit of the i.i.d. assumption by using simulation and developing an upper bound. We show that the model, which uses the i.i.d. unloaded amount, provides a good estimate of the storage value, and yields a near optimal inventory control policy. We also test the performance of a model that uses constant throughput to determine the inventory release policy. This model performs worse than the model of Chapter 3 for storage valuation purposes, but can be used to suggest the optimal inventory control policy, especially when the ratio of flow rate to storage size is high, i.e., storage is scarce. Chapter 5 summarizes the contributions of this thesis.
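The structure of the inventory-release model of Chapter 3 can be sketched as a small finite-horizon dynamic program. In this toy version, inventory and sale quantities are discretised coarsely, i.i.d. unloaded amounts stand in for the CQN model's output distribution, and a deterministic seasonal price path replaces the trinomial lattice; every number is hypothetical. The sketch also reproduces the qualitative finding cited above: in some states it is optimal to sell only part of the available inventory, here because a partial sale frees storage space for incoming cargoes ahead of the winter price peak:

```python
import numpy as np

# Hypothetical discretisation: inventory in {0,...,Q} units, monthly
# decisions, per-period release (sale) capped at R units.
Q = 10          # storage capacity (units)
R = 4           # max release per period (units)
T = 12          # monthly decision periods
disc = 0.995    # one-period discount factor

# Deterministic seasonal prices (illustrative; winter > summer), standing
# in for the thesis's trinomial price lattice.
season = 3.0 + 1.5 * np.cos(2 * np.pi * np.arange(T) / 12)

# i.i.d. unloaded amounts per period (units), a stand-in for the CQN
# model's analytically derived distribution.
arrivals = np.array([0, 1, 2, 3])
arr_prob = np.array([0.1, 0.3, 0.4, 0.2])

V = np.zeros(Q + 1)                  # terminal value function
policy = np.zeros((T, Q + 1), int)   # optimal units sold in each state
for t in reversed(range(T)):
    price = season[t]
    newV = np.empty(Q + 1)
    for inv in range(Q + 1):
        best, best_s = -np.inf, 0
        for s in range(min(inv, R) + 1):        # candidate sale quantity
            # Next inventory: remaining stock plus arrival, capped at Q
            # (cargo exceeding the cap is lost, i.e. spilled).
            nxt = np.minimum(inv - s + arrivals, Q)
            val = price * s + disc * (arr_prob * V[nxt]).sum()
            if val > best:
                best, best_s = val, s
        newV[inv], policy[t, inv] = best, best_s
    V = newV

# States where a strictly positive but partial sale is optimal.
partial = [(t, i) for t in range(T) for i in range(Q + 1)
           if 0 < policy[t, i] < min(i, R)]
print("example partial-sale states (t, inventory):", partial[:5])
```

The backward induction makes the trade-off explicit: the marginal unit kept is worth the discounted expectation of its future sale price, while the marginal unit sold earns today's price plus the value of the storage slot it frees, so the optimal sale quantity need not hit either zero or the release cap.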
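The i.i.d. assumption examined in Chapter 4 can also be probed directly by simulating the shipping cycle. The discrete-event sketch below is a stand-in for the analytical CQN model, with hypothetical fleet size, cargo size, and stage durations: a fixed fleet of tankers cycles through loading, the laden voyage, unloading, and the ballast return, and each completed cycle deposits one cargo. Aggregating deliveries into periods gives an empirical unloaded-amount distribution whose lag-1 autocorrelation indicates how reasonable the i.i.d. treatment is (values near zero support it):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical closed-cycle fleet: load -> laden voyage -> unload -> ballast.
n_tankers = 6
cargo = 3.0e6            # MMBtu delivered per completed cycle
horizon = 2000           # simulated days
mean_cycle = np.array([1.0, 9.0, 1.0, 9.0])  # mean stage durations (days)

# Each tanker repeats its cycle with exponential stage times; for this
# sketch a delivery is recorded once per completed cycle (a simplification).
unloaded = np.zeros(horizon)
for _ in range(n_tankers):
    t = rng.uniform(0, mean_cycle.sum())       # stagger initial positions
    while t < horizon:
        t += rng.exponential(mean_cycle).sum()  # draw one full cycle
        if t < horizon:
            unloaded[int(t)] += cargo

# Aggregate into weekly decision periods and estimate the serial
# dependence of the unloaded amounts.
weekly = unloaded[: horizon // 7 * 7].reshape(-1, 7).sum(axis=1)
x = weekly - weekly.mean()
lag1 = (x[:-1] * x[1:]).mean() / (x * x).mean()

print(f"mean weekly unload: {weekly.mean():.3g} MMBtu")
print(f"lag-1 autocorrelation: {lag1:.3f}")
```

A weak lag-1 autocorrelation in such a simulation is the kind of evidence that supports modeling the per-period unloaded amount as an i.i.d. draw from a fixed distribution, which is what the "rolling forward" construction of Chapter 3 assumes.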
Delivery arrangements for health systems in low-income countries: an overview of systematic reviews.
Ciapponi, Agustín; Lewin, Simon; Herrera, Cristian A; Opiyo, Newton; Pantoja, Tomas; Paulsen, Elizabeth; Rada, Gabriel; Wiysonge, Charles S; Bastías, Gabriel; Dudley, Lilian; Flottorp, Signe; Gagnon, Marie-Pierre; Garcia Marti, Sebastian; Glenton, Claire; Okwundu, Charles I; Peñaloza, Blanca; Suleman, Fatima; Oxman, Andrew D
2017-09-13
Delivery arrangements include changes in who receives care and when, who provides care, the working conditions of those who provide care, coordination of care amongst different providers, where care is provided, the use of information and communication technology to deliver care, and quality and safety systems. How services are delivered can have impacts on the effectiveness, efficiency and equity of health systems. This broad overview of the findings of systematic reviews can help policymakers and other stakeholders identify strategies for addressing problems and improve the delivery of services. To provide an overview of the available evidence from up-to-date systematic reviews about the effects of delivery arrangements for health systems in low-income countries. Secondary objectives include identifying needs and priorities for future evaluations and systematic reviews on delivery arrangements and informing refinements of the framework for delivery arrangements outlined in the review. We searched Health Systems Evidence in November 2010 and PDQ-Evidence up to 17 December 2016 for systematic reviews. We did not apply any date, language or publication status limitations in the searches. We included well-conducted systematic reviews of studies that assessed the effects of delivery arrangements on patient outcomes (health and health behaviours), the quality or utilisation of healthcare services, resource use, healthcare provider outcomes (such as sick leave), or social outcomes (such as poverty or employment) and that were published after April 2005. We excluded reviews with limitations important enough to compromise the reliability of the findings. Two overview authors independently screened reviews, extracted data, and assessed the certainty of evidence using GRADE. 
We prepared SUPPORT Summaries for eligible reviews, including key messages, 'Summary of findings' tables (using GRADE to assess the certainty of the evidence), and assessments of the relevance of findings to low-income countries. We identified 7272 systematic reviews and included 51 of them in this overview. We judged 6 of the 51 reviews to have important methodological limitations and the other 45 to have only minor limitations. We grouped delivery arrangements into eight categories. Some reviews provided more than one comparison and were in more than one category. Across these categories, the following interventions were effective; that is, they had desirable effects on at least one outcome with moderate- or high-certainty evidence and no moderate- or high-certainty evidence of undesirable effects. Who receives care and when: queuing strategies and antenatal care to groups of mothers. Who provides care: lay health workers for caring for people with hypertension, lay health workers to deliver care for mothers and children or infectious diseases, lay health workers to deliver community-based neonatal care packages, midlevel health professionals for abortion care, social support to pregnant women at risk, midwife-led care for childbearing women, non-specialist providers in mental health and neurology, and physician-nurse substitution. Coordination of care: hospital clinical pathways, case management for people living with HIV and AIDS, interactive communication between primary care doctors and specialists, hospital discharge planning, adding a service to an existing service and integrating delivery models, referral from primary to secondary care, physician-led versus nurse-led triage in emergency departments, and team midwifery.
Where care is provided: high-volume institutions, home-based care (with or without multidisciplinary team) for people living with HIV and AIDS, home-based management of malaria, home care for children with acute physical conditions, community-based interventions for childhood diarrhoea and pneumonia, out-of-facility HIV and reproductive health services for youth, and decentralised HIV care. Information and communication technology: mobile phone messaging for patients with long-term illnesses, mobile phone messaging reminders for attendance at healthcare appointments, mobile phone messaging to promote adherence to antiretroviral therapy, women carrying their own case notes in pregnancy, interventions to improve childhood vaccination. Quality and safety systems: decision support with clinical information systems for people living with HIV/AIDS. Complex interventions (cutting across delivery categories and other health system arrangements): emergency obstetric referral interventions. A wide range of strategies have been evaluated for improving delivery arrangements in low-income countries, using sound systematic review methods in both Cochrane and non-Cochrane reviews. These reviews have assessed a range of outcomes. Most of the available evidence focuses on who provides care, where care is provided and coordination of care. For all the main categories of delivery arrangements, we identified gaps in primary research related to uncertainty about the applicability of the evidence to low-income countries, low- or very low-certainty evidence or a lack of studies.
NASA Astrophysics Data System (ADS)
Sarcevic, Ina; Tan, Chung-I.
2000-07-01
The Table of Contents for the full book PDF is as follows: * Preface * Monday morning session: Hadronic Final States - Conveners: E. de Wolf and J. W. Gary * Session Chairman: J. W. Gary * Inclusive Jets at the Tevatron * Forward Jets, Dijets, and Subjets at the Tevatron * Inclusive Hadron Production and Dijets at HERA * Recent OPAL Results on Photon Structure and Interactions * Review of Two-Photon Physics at LEP * Session Chairman: E. de Wolf * An Intriguing Area-Law-Based Hadron Production Scheme in e+e- Annihilation and Its Possible Extensions * Hyperfine Splitting in Hadron Production at High Energies * Event Selection Effects on Multiplicities in Quark and Gluon Jets * Quark and Gluon Jet Properties at LEP * Rapidity Gaps in Quark and Gluon Jets -- A Perturbative Approach * Monday afternoon session: Diffractive and Small-x - Conveners: M. Derrick and A. White * Session Chairman: A. White * Structure Functions: Low x, High y, Low Q2 * The Next-to-Leading Dynamics of the BFKL Pomeron * Renormalization Group Improved BFKL Equation * Session Chairman: G. Briskin * New Experimental Results on Diffraction at HERA * Diffractive Parton Distributions in Light-Cone QCD * The Logarithmic Derivative of the F2 Structure Function and Saturation * Spin Dependence of Diffractive DIS * Monday evening session * Session Chairman: M. Braun * Tests of QCD with Particle Production at HERA: Review and Outlook * Double Parton Scattering and Hadron Structure in Transverse Space * The High Density Parton Dynamics from Eikonal and Dipole Pictures * Hints of Higher Twist Effects in the Slope of the Proton Structure Function * Tuesday morning session: Correlations and Fluctuations - Conveners: R. Hwa and M. Tannenbaum * Session Chairman: A.
Giovannini -- Fluctuations and Correlations * Bose-Einstein Results from L3 * Short-Range and Long-Range Correlations in DIS at HERA * Color Mutation Model, Intermittency, and Erraticity * QCD Queuing and Hadron Multiplicity * Soft and Semi-hard Components in Multiplicity Distributions in the TeV Region * Qualitative Difference Between Particle Production Dynamics in Soft and Hard Processes * Session Chairman: M. Tannenbaum -- Bose-Einstein Correlations * Questions in Bose-Einstein Correlations * The Source Size Dependence on the m_hadron Applying Fermi and Bose Statistics and I-Spin Invariance * Signal of Partial UA(1) Symmetry Restoration from Two-Pion Bose-Einstein Correlations * Multiparticle Bose-Einstein Correlations in Heavy-Ion Collisions * Tuesday afternoon session: Heavy Ion Collisions - Conveners: B. Müller and J. Stachel * Session Chairman: J. Stachel * Probing Baryon Freeze-out Density at the AGS with Proton Correlations * Centrality Dependence of Hadronic Observables at CERN SPS * Study of Transverse Momentum Spectra in pp Collisions with a Statistical Model of Hadronisation * Session Chairman: B. Brower * Production of Light (Anti-)Nuclei with E864 at the AGS * QCD Critical Point in Heavy-Ion Collision Experiments * Tuesday evening session * Session Chairman: H. M. Fried * Oscillating H_q, Event Shapes, and QCD * Critical Behavior of Quark-Hadron Phase Transition * Shadowing of Gluons at RHIC and LHC * Parton Distributions in Nuclei at Small x * Wednesday morning session: Diffraction and Small x - Conveners: M. Derrick and A. White * Session Chairman: C.
Pajares * High-Energy Effective Action from Scattering of Shock Waves in QCD * The Triangle Anomaly in the Triple-Regge Limit * CDF Results on Hard Diffraction and Rapidity Gap Physics * DØ Results on Hard Diffraction * Interjet Rapidity Gaps in Perturbative QCD * Pomeron: Beyond the Standard Approach * Factorization and Diffractive Production at Collider Energies * Thursday morning session: Heavy Ion Collisions - Conveners: B. Müller and J. B. Stachel * Session Chairman: N. Schmitz * Summary of J/ψ Suppression Data and Preliminary Results on Multiplicity Distributions in Pb-Pb Collisions from the NA50 Experiment * Duality and Chiral Restoration from Dilepton Production in Relativistic Heavy-Ion Collisions * Session Chairman: I. Sarcevic * Transport-Theoretical Analysis of Reaction Dynamics, Particle Production and Freeze-out at RHIC * Inclusive Particle Spectra and Exotic Particle Searches Using STAR * The First Fermi in a High Energy Nuclear Collision * Probing the Space-Time Evolution of Heavy Ion Collisions with Bremsstrahlung * Thursday afternoon session: Hadronic Final States - Conveners: E. de Wolf and J. Gary * Session Chairman: F. Verbeure * QCD with SLD * QCD at LEP II * Multidimensional Analysis of the Bose-Einstein Correlations at DELPHI * Study of Color Singlet with Gluonic Subsinglet by Color Effective Hamiltonian * Correlations and Fluctuations - Conveners: R. Hwa and M. Tannenbaum * Session Chairman: R. C. Hwa -- Fluctuations in Heavy-Ion Collisions * Scale-Local Statistical Measures and the Multiparticle Final State * Centrality and ET Fluctuations from p + Be to Au + Au at AGS Energies * Order Parameter of Single Event * Multiplicities, Transverse Momenta and Their Correlations from Percolating Colour Strings * Probing the QCD Critical Point in Nuclear Collisions * Event-by-Event Fluctuations in Pb + Pb Collisions at the CERN SPS * Friday morning session: High Energy Collisions and Cosmic-Ray/Astrophysics - Conveners: F. Halzen and T.
Stanev * Session Chairman: U. Sukhatme * Rethinking the Eikonal Approximation * QCD and Total Cross-Sections * The Role of Multiple Parton Collisions in Hadron Collisions * Effective Cross Sections and Spatial Structure of the Hadrons * Looking for the Odderon * QCD in Embedded Coordinates * Session Chairman: F. Bopp * Extensive Air Showers and Hadronic Interaction Models * Penetration of the Earth by Ultrahigh Energy Neutrinos and the Parton Distributions Inside the Nucleon * Comparison of Prompt Muon Observations to Charm Expectations * Friday afternoon session: Recent Developments - Conveners: R. Brower and I. Sarcevic * Session Chairman: G. Guralnik * The Relation Between Gauge Theories and Gravity * From Black Holes to Pomeron: Tensor Glueball and Pomeron Intercept at Strong Coupling * Summary Talks * Summary of Results of the Ultrarelativistic Heavy Ion Fixed Target Program * Review of Theory Talks * Summary of Experimental Talks * List of Participants