Sample records for packet scheduling algorithm

  1. Time-optimum packet scheduling for many-to-one routing in wireless sensor networks

    USGS Publications Warehouse

    Song, W.-Z.; Yuan, F.; LaHusen, R.

    2007-01-01

    This paper studies the WSN application scenario with periodical traffic from all sensors to a sink. We present a time-optimum and energy-efficient packet scheduling algorithm and its distributed implementation. We first give a general many-to-one packet scheduling algorithm for wireless networks, and then prove that it is time-optimum and costs max(2N(u1) - 1, N(u0) - 1) time slots, assuming each node reports one unit of data in each round. Here N(u0) is the total number of sensors, while N(u1) denotes the number of sensors in a sink's largest branch subtree. With a few adjustments, we then show that our algorithm also achieves time-optimum scheduling in heterogeneous scenarios, where each sensor reports a heterogeneous amount of data in each round. Then we give a distributed implementation to let each node calculate its duty-cycle locally and maximize efficiency globally. In this packet scheduling algorithm, each node goes to sleep whenever it is not transceiving, so that the energy waste of idle listening is also eliminated. Finally, simulations are conducted to evaluate network performance using the Qualnet simulator. Among other contributions, our study also identifies the maximum reporting frequency that a deployed sensor network can handle. © 2006 IEEE.
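    The bound quoted above can be checked mechanically. Below is a small sketch (our own illustration, not code from the paper) that computes max(2N(u1) - 1, N(u0) - 1) from a routing tree given as a child-to-parent map, under the abstract's one-packet-per-sensor-per-round assumption; the function and variable names are ours.

      # Sketch: the schedule-length bound max(2*N(u1) - 1, N(u0) - 1) quoted above,
      # computed from a many-to-one routing tree (one packet per sensor per round).
      from collections import defaultdict

      def schedule_length(parent):
          """parent: dict child -> parent; the sink is the only node never used as a key."""
          children = defaultdict(list)
          nodes = set(parent) | set(parent.values())
          for child, par in parent.items():
              children[par].append(child)
          sink = next(n for n in nodes if n not in parent)

          def subtree_size(n):
              return 1 + sum(subtree_size(c) for c in children[n])

          n_total = len(nodes) - 1                                  # N(u0): sensors only
          largest = max((subtree_size(c) for c in children[sink]), default=0)  # N(u1)
          return max(2 * largest - 1, n_total - 1)

      # Toy tree: sink 's' with one branch of 3 sensors and one of 1 sensor.
      print(schedule_length({'a': 's', 'b': 'a', 'c': 'b', 'd': 's'}))  # -> 5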

  2. Time-optimum packet scheduling for many-to-one routing in wireless sensor networks

    USGS Publications Warehouse

    Song, W.-Z.; Yuan, F.; LaHusen, R.; Shirazi, B.

    2007-01-01

    This paper studies the wireless sensor networks (WSN) application scenario with periodical traffic from all sensors to a sink. We present a time-optimum and energy-efficient packet scheduling algorithm and its distributed implementation. We first give a general many-to-one packet scheduling algorithm for wireless networks, and then prove that it is time-optimum and costs max(2N(u1) - 1, N(u0) - 1) time slots, assuming each node reports one unit of data in each round. Here N(u0) is the total number of sensors, while N(u1) denotes the number of sensors in a sink's largest branch subtree. With a few adjustments, we then show that our algorithm also achieves time-optimum scheduling in heterogeneous scenarios, where each sensor reports a heterogeneous amount of data in each round. Then we give a distributed implementation to let each node calculate its duty-cycle locally and maximize efficiency globally. In this packet-scheduling algorithm, each node goes to sleep whenever it is not transceiving, so that the energy waste of idle listening is also mitigated. Finally, simulations are conducted to evaluate network performance using the Qualnet simulator. Among other contributions, our study also identifies the maximum reporting frequency that a deployed sensor network can handle.

  3. Uplink Packet-Data Scheduling in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Choi, Young Woo; Kim, Seong-Lyun

    In this letter, we consider the uplink packet scheduling for non-real-time data users in a DS-CDMA system. As an effort to jointly optimize throughput and fairness, we formulate a time-span minimization problem incorporating the time-multiplexing of different simultaneous transmission schemes. Based on simple rules, we propose efficient scheduling algorithms and compare them with the optimal solution obtained by linear programming.

  4. Data transmission system and method

    NASA Technical Reports Server (NTRS)

    Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)

    2010-01-01

    A method of transmitting data packets, where randomness is added to the schedule. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.

  5. New Scheduling Algorithms for Agile All-Photonic Networks

    NASA Astrophysics Data System (ADS)

    Mehri, Mohammad Saleh; Ghaffarpour Rahbar, Akbar

    2017-12-01

    An optical overlaid star network is a class of agile all-photonic networks that consists of one or more core node(s) at the center of the star network and a number of edge nodes around the core node. In this architecture, a core node may use a scheduling algorithm for transmission of traffic through the network. A core node is responsible for scheduling optical packets that arrive from edge nodes and switching them toward their destinations. Nowadays, most edge nodes use virtual output queue (VOQ) architecture for buffering client packets to achieve high throughput. This paper presents two efficient scheduling algorithms called discretionary iterative matching (DIM) and adaptive DIM. These schedulers find maximum matching in a small number of iterations and provide high throughput and incur low delay. The number of arbiters in these schedulers and the number of messages exchanged between inputs and outputs of a core node are reduced. We show that DIM and adaptive DIM can provide better performance in comparison with iterative round-robin matching with SLIP (iSLIP). SLIP means the act of sliding for a short distance to select one of the requested connections based on the scheduling algorithm.
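    The abstract does not spell out the DIM rules, so as background the sketch below illustrates the class of scheduler it is compared against: one request-grant-accept iteration of an iSLIP-style round-robin matcher over virtual output queues. The data layout and pointer handling here are our own simplification.

      # One request-grant-accept iteration in the iSLIP style (illustrative only).
      def islip_iteration(requests, grant_ptr, accept_ptr):
          """requests[i][j]: input i has a packet queued for output j.
          grant_ptr/accept_ptr: round-robin pointers of the outputs/inputs."""
          n = len(requests)
          # Grant phase: each output grants the requesting input nearest its pointer.
          grants = {}
          for j in range(n):
              for k in range(n):
                  i = (grant_ptr[j] + k) % n
                  if requests[i][j]:
                      grants[j] = i
                      break
          # Accept phase: each input accepts the granting output nearest its pointer.
          granted_by = {}
          for j, i in grants.items():
              granted_by.setdefault(i, []).append(j)
          matches = []
          for i, outs in granted_by.items():
              j = min(outs, key=lambda o: (o - accept_ptr[i]) % n)
              matches.append((i, j))
              grant_ptr[j] = (i + 1) % n      # pointers advance only on acceptance
              accept_ptr[i] = (j + 1) % n
          return matches

      # Toy 3x3 switch: input 0 wants outputs 0 and 1, input 1 wants output 1.
      reqs = [[True, True, False], [False, True, False], [False, False, False]]
      print(islip_iteration(reqs, grant_ptr=[0, 0, 0], accept_ptr=[0, 0, 0]))  # [(0, 0)]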

  6. Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels

    NASA Astrophysics Data System (ADS)

    Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang

    In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over the error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in the round-robin manner and also adopting a mechanism for channel prediction and swapping.
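    The abstract does not detail the prediction or swapping mechanism, so the sketch below only illustrates the general idea: a round-robin pass that defers clients whose channel is predicted to be in an error burst and lets later clients with clean channels take their turns. predict_good is a placeholder for the paper's channel predictor.

      # Illustrative round-robin order with prediction-driven swapping (not the
      # paper's exact two-step algorithm).
      def schedule_round(clients, start_slot, predict_good):
          schedule, deferred = [], []
          for c in clients:                        # nominal round-robin order
              if predict_good(c, start_slot + len(schedule)):
                  schedule.append(c)
              else:
                  deferred.append(c)               # swap out: predicted error burst
          return schedule + deferred               # deferred clients retried at the tail

      clients = ["A", "B", "C"]
      # Toy predictor: B's channel is in an error burst during the first two slots.
      predict = lambda c, slot: not (c == "B" and slot < 2)
      print(schedule_round(clients, start_slot=0, predict_good=predict))  # ['A', 'C', 'B']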

  7. Exploring a QoS Driven Scheduling Approach for Peer-to-Peer Live Streaming Systems with Network Coding

    PubMed Central

    Cui, Laizhong; Lu, Nan; Chen, Fu

    2014-01-01

    Most large-scale peer-to-peer (P2P) live streaming systems use mesh to organize peers and leverage pull scheduling to transmit packets for providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves its efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, in this paper we propose a QoS driven push scheduling approach. The main contributions of this paper are: (i) we introduce a new network coding method to increase the content diversity and reduce the complexity of scheduling; (ii) we formulate the push scheduling as an optimization problem and transform it to a min-cost flow problem for solving it in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and do extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968

  8. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  9. A novel downlink scheduling strategy for traffic communication system based on TD-LTE technology.

    PubMed

    Chen, Ting; Zhao, Xiangmo; Gao, Tao; Zhang, Licheng

    2016-01-01

    Many existing classical scheduling algorithms can achieve good system throughput and user fairness; however, they are not designed for the traffic transportation environment and cannot consider whether the transmission performance of various information flows meets the comprehensive requirements of traffic safety and delay tolerance. This paper proposes a novel downlink scheduling strategy for a traffic communication system based on TD-LTE technology, which performs two classification mappings for the various information flows in the eNodeB: firstly, every information flow packet is associated with a traffic safety importance weight according to its relevance to traffic safety; secondly, every traffic information flow is associated with a service type importance weight according to its quality of service (QoS) requirements. Once the connection is established, at every scheduling moment the scheduler decides the scheduling order of all buffers' head-of-line packets according to the instant value of a scheduling importance weight function calculated by the proposed algorithm. Simulations of different scenarios verify that the proposed algorithm provides superior differentiated transmission service and reliable QoS guarantees to information flows with different traffic safety levels and service types, making it more suitable for the traffic transportation environment than the existing popular PF algorithm. With limited wireless resources, information flows closely related to traffic safety always obtain priority scheduling in a timely manner, which helps make passengers' journeys safer. Moreover, the proposed algorithm not only obtains flow throughput and user fairness almost equal to those of the PF algorithm, but also provides better real-time transmission guarantees to real-time information flows.
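    The abstract describes a scheduling importance weight built from a traffic safety weight, a service-type weight and per-flow QoS state, but does not give the formula; the ranking rule below is therefore only a hedged illustration (the multiplicative combination and the PF-style rate term are our assumptions).

      # Illustrative head-of-line ranking: safety weight x service-type weight x a
      # PF-style rate/throughput term. Not the paper's actual weight function.
      def importance(flow):
          pf_term = flow["instant_rate"] / max(flow["avg_throughput"], 1e-9)
          return flow["safety_weight"] * flow["service_weight"] * pf_term

      def pick_next(flows):
          # Serve the head-of-line packet of the flow with the largest weight.
          return max(flows, key=importance)

      flows = [
          {"name": "collision-warning", "safety_weight": 4, "service_weight": 2,
           "instant_rate": 1.0e6, "avg_throughput": 2.0e6},
          {"name": "infotainment", "safety_weight": 1, "service_weight": 1,
           "instant_rate": 3.0e6, "avg_throughput": 1.0e6},
      ]
      print(pick_next(flows)["name"])   # -> collision-warning (weight 4.0 vs 3.0)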

  10. Greedy data transportation scheme with hard packet deadlines for wireless ad hoc networks.

    PubMed

    Lee, HyungJune

    2014-01-01

    We present a greedy data transportation scheme with hard packet deadlines in ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay to a shortest-path stationary node toward the destination or to a passing-by mobile node that will carry the packet closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time until the packet deadline, turning it into the networking benefits of reduced overall routing cost and improved packet delivery performance. Also, we demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services.
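    The abstract does not give the exact per-hop decision rule, so the sketch below only illustrates the stated idea: prefer a passing-by mobile node when its estimated delivery still meets the hard deadline and its cost is lower, otherwise fall back to the shortest-path stationary next hop. All field names and the cost/time estimates are our own illustrative assumptions.

      # Illustrative per-hop relay choice under a hard packet deadline.
      def choose_next_hop(packet, stationary, mobile_candidates, now):
          feasible = [m for m in mobile_candidates
                      if now + m["est_delivery_time"] <= packet["deadline"]]
          if feasible:
              best_mobile = min(feasible, key=lambda m: m["cost"])
              if best_mobile["cost"] < stationary["cost"]:
                  return best_mobile                 # cheaper and still meets the deadline
          return stationary                          # shortest-path fallback

      pkt = {"deadline": 120.0}
      stationary_hop = {"id": "s7", "cost": 10.0, "est_delivery_time": 15.0}
      mobiles = [{"id": "bus3", "cost": 4.0, "est_delivery_time": 90.0},
                 {"id": "car1", "cost": 6.0, "est_delivery_time": 200.0}]
      print(choose_next_hop(pkt, stationary_hop, mobiles, now=0.0)["id"])  # -> bus3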

  11. Greedy Data Transportation Scheme with Hard Packet Deadlines for Wireless Ad Hoc Networks

    PubMed Central

    Lee, HyungJune

    2014-01-01

    We present a greedy data transportation scheme with hard packet deadlines in ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay to a shortest-path stationary node toward the destination or to a passing-by mobile node that will carry the packet closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time until the packet deadline, turning it into the networking benefits of reduced overall routing cost and improved packet delivery performance. Also, we demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services. PMID:25258736

  12. Wireless Sensor Network Metrics for Real-Time Systems

    DTIC Science & Technology

    2009-05-20

    to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching...a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate

  13. Call Admission Control on Single Node Networks under Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) Scheduler

    NASA Astrophysics Data System (ADS)

    Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi

    Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky bucket constrained sessions. However, the delay bounds for leaky bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling policy like GPS, and controls the service rate in order to lower the delay bounds for leaky bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS for leaky-bucket constrained sessions with deterministic delay requirements. This CAC algorithm for ORC-GPS determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with that for GPS in terms of schedulable region and computational complexity.
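    For context, the single-node GPS delay bound that the abstract describes as unnecessarily large is the standard result (in the usual notation, not taken from this paper): a session i constrained by a leaky bucket (\sigma_i, \rho_i) and served with guaranteed rate g_i = \phi_i C / \sum_j \phi_j, where g_i \ge \rho_i, has worst-case delay

      D_i^{\max} \le \frac{\sigma_i}{g_i}

    ORC-GPS lowers bounds of this kind by shaping the service rate over time instead of serving at a fixed weight until the session backlog clears.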

  14. Energy-saving scheme based on downstream packet scheduling in ethernet passive optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Lincong; Liu, Yejun; Guo, Lei; Gong, Xiaoxue

    2013-03-01

    With increasing network sizes, the energy consumption of Passive Optical Networks (PONs) has grown significantly. Therefore, it is important to design effective energy-saving schemes in PONs. Generally, energy-saving schemes have focused on putting low-loaded Optical Network Units (ONUs) to sleep, which tends to bring large packet delays. Further, traditional ONU sleep modes cannot put the transmitter and receiver to sleep independently, even when one of them is not required to transmit or receive packets. Clearly, this wastes energy. Thus, in this paper, we propose an Energy-Saving scheme based on downstream Packet Scheduling (ESPS) in Ethernet PON (EPON). First, we design both an algorithm and a rule for downstream packet scheduling at the inter- and intra-ONU levels, respectively, to reduce the downstream packet delay. After that, we propose a hybrid sleep mode that contains not only an ONU deep sleep mode but also independent sleep modes for the transmitter and the receiver. This ensures that the energy consumed by the ONUs is minimal. To realize the hybrid sleep mode, a modified GATE control message is designed that involves 10 time points for sleep processes. In ESPS, the 10 time points are calculated according to the allocated bandwidths in both the upstream and the downstream. The simulation results show that ESPS outperforms the traditional Upstream Centric Scheduling (UCS) scheme in terms of energy consumption and the average delay for both real-time and non-real-time packets downstream. The simulation results also show that the average energy consumption of each ONU in larger-sized networks is less than that in smaller-sized networks; hence, our ESPS is better suited for larger-sized networks.

  15. Avoiding Biased-Feeding in the Scheduling of Collaborative Multipath TCP.

    PubMed

    Tsai, Meng-Hsun; Chou, Chien-Ming; Lan, Kun-Chan

    2016-01-01

    Smartphones have become the major communication and portable computing devices that access the Internet through Wi-Fi or mobile networks. Unfortunately, users without a mobile data subscription can only access the Internet at limited locations, such as hotspots. In this paper, we propose a collaborative bandwidth sharing protocol (CBSP) built on top of MultiPath TCP (MPTCP). CBSP enables users to buy bandwidth on demand from neighbors (called Helpers) and uses virtual interfaces to bind the subflows of MPTCP to avoid modifying the implementation of MPTCP. However, although MPTCP provides the required multi-homing functionality for bandwidth sharing, the current packet scheduling in collaborative MPTCP (e.g., Co-MPTCP) leads to the so-called biased-feeding problem. In this problem, the fastest link might always be selected to send packets whenever it has available cwnd, which results in other links not being fully utilized. In this work, we set out to design an algorithm, called Scheduled Window-based Transmission Control (SWTC), to improve the performance of packet scheduling in MPTCP, and we perform extensive simulations to evaluate its performance.
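    The abstract names the SWTC algorithm but not its exact rule, so the toy comparison below only illustrates the biased-feeding problem it targets: always feeding the subflow with the most available cwnd starves the slower link, whereas a window-proportional split keeps both busy. The quota computation is our assumption, not the paper's design.

      # Toy comparison of two ways to hand a burst of packets to MPTCP subflows.
      def fastest_first(subflows, n_packets):
          """Always feed the subflow with the most remaining cwnd (biased feeding)."""
          counts = {s["name"]: 0 for s in subflows}
          for _ in range(n_packets):
              s = max(subflows, key=lambda x: x["cwnd"] - counts[x["name"]])
              counts[s["name"]] += 1
          return counts

      def window_proportional(subflows, n_packets):
          """Split the burst in proportion to each subflow's cwnd."""
          total = sum(s["cwnd"] for s in subflows)
          counts = {s["name"]: (n_packets * s["cwnd"]) // total for s in subflows}
          leftover = n_packets - sum(counts.values())
          counts[max(subflows, key=lambda s: s["cwnd"])["name"]] += leftover
          return counts

      flows = [{"name": "wifi", "cwnd": 30}, {"name": "helper", "cwnd": 10}]
      print(fastest_first(flows, 20))        # {'wifi': 20, 'helper': 0}
      print(window_proportional(flows, 20))  # {'wifi': 15, 'helper': 5}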

  16. Avoiding Biased-Feeding in the Scheduling of Collaborative Multipath TCP

    PubMed Central

    2016-01-01

    Smartphones have become the major communication and portable computing devices that access the Internet through Wi-Fi or mobile networks. Unfortunately, users without a mobile data subscription can only access the Internet at limited locations, such as hotspots. In this paper, we propose a collaborative bandwidth sharing protocol (CBSP) built on top of MultiPath TCP (MPTCP). CBSP enables users to buy bandwidth on demand from neighbors (called Helpers) and uses virtual interfaces to bind the subflows of MPTCP to avoid modifying the implementation of MPTCP. However, although MPTCP provides the required multi-homing functionality for bandwidth sharing, the current packet scheduling in collaborative MPTCP (e.g., Co-MPTCP) leads to the so-called biased-feeding problem. In this problem, the fastest link might always be selected to send packets whenever it has available cwnd, which results in other links not being fully utilized. In this work, we set out to design an algorithm, called Scheduled Window-based Transmission Control (SWTC), to improve the performance of packet scheduling in MPTCP, and we perform extensive simulations to evaluate its performance. PMID:27529783

  17. A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks.

    PubMed

    Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung

    2016-01-05

    Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR.
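    As a sketch of the interval-selection step described above: given some estimate of the end-to-end delay distribution as a function of the duty-cycle interval, choose the largest interval whose delay-constrained success ratio still meets the requirement. The candidate intervals and the toy delay model below are placeholders, not the estimator derived in the paper.

      # Pick the largest duty-cycle interval T whose estimated P(E2E delay <= bound)
      # still meets the required delay-constrained success ratio (DCSR).
      def max_interval(candidates, delay_cdf, delay_bound, required_dcsr):
          feasible = [T for T in candidates if delay_cdf(T, delay_bound) >= required_dcsr]
          return max(feasible) if feasible else min(candidates)

      def toy_delay_cdf(T, bound, hops=5):
          """Crude stand-in for the paper's estimator: success probability shrinks as
          the interval (and hence per-hop waiting) grows."""
          mean_delay = hops * T / 2.0
          return min(1.0, bound / (2.0 * mean_delay))

      print(max_interval(candidates=[50, 100, 200, 400, 800],
                         delay_cdf=toy_delay_cdf, delay_bound=1000,
                         required_dcsr=0.9))          # -> 200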

  18. A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks

    PubMed Central

    Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung

    2016-01-01

    Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR. PMID:26742046

  19. Quality of service routing in wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh

    2003-08-01

    An efficient routing protocol is essential to guarantee application-level quality of service for applications running on wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints such as path lifetime and the availability of sufficient energy and buffer space in each of the nodes on the path between the source and destination. The algorithm chooses the best path from among the multiple paths that it computes between two endpoints. We consider the use of control packets that run at a priority higher than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers, such as weighted fair queuing and weighted random early detection, among others, in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for real-time voice applications.

  20. Traffic shaping and scheduling for OBS-based IP/WDM backbones

    NASA Astrophysics Data System (ADS)

    Elhaddad, Mahmoud S.; Melhem, Rami G.; Znati, Taieb; Basak, Debashis

    2003-10-01

    We introduce Proactive Reservation-based Switching (PRS) -- a switching architecture for IP/WDM networks based on Labeled Optical Burst Switching. PRS achieves packet delay and loss performance comparable to that of packet-switched networks, without requiring large buffering capacity, or burst scheduling across a large number of wavelengths at the core routers. PRS combines proactive channel reservation with periodic shaping of ingress-egress traffic aggregates to hide the offset latency and approximate the utilization/buffering characteristics of discrete-time queues with periodic arrival streams. A channel scheduling algorithm imposes constraints on burst departure times to ensure efficient utilization of wavelength channels and to maintain the distance between consecutive bursts through the network. Results obtained from simulation using TCP traffic over carefully designed topologies indicate that PRS consistently achieves channel utilization above 90% with modest buffering requirements.

  1. Performance and policy dimensions in internet routing

    NASA Technical Reports Server (NTRS)

    Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.; Thyagarajan, Ajit

    1995-01-01

    The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high speed crossbar switch and an adaptive, distributed, TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switches, one of which is attached to each node. In order to send a single burst or a multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols and experiments conducted is given in various reports and papers published. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.

  2. Adaptive Subframe Partitioning and Efficient Packet Scheduling in OFDMA Cellular System with Fixed Decode-and-Forward Relays

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Ji, Yusheng; Liu, Fuqiang

    The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than that in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes can not adapt to various traffic demands. Moreover, single-hop scheduling algorithms can not be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.

  3. Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH

    PubMed Central

    Oh, Sukho; Hwang, DongYeop; Kim, Ki-Hyung; Kim, Kangseok

    2018-01-01

    Time Slotted Channel Hopping (TSCH) is widely used in the industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency for TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account the network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as a routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously by utilizing only the local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network could transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance with existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and higher packet delivery ratio compared to the existing schemes regardless of the network topology. PMID:29659508
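    The abstract describes consecutive timeslots along the forwarding path but not the exact offset rule, so the sketch below only shows one simple way a node could derive such a schedule locally from its route to the sink; the slot numbering, slotframe length and omission of channel offsets are our own illustrative simplifications.

      # Assign consecutive timeslots to the hops of a node's route to the sink, so a
      # packet can ride the "escalator" instead of waiting a slotframe at every hop.
      def escalator_slots(route_to_sink, base_slot=0, slotframe_len=101):
          """route_to_sink: nodes from the source up to and including the sink.
          Returns (tx, rx, slot) triples in consecutive timeslots."""
          schedule = []
          for hop, (tx, rx) in enumerate(zip(route_to_sink, route_to_sink[1:])):
              schedule.append((tx, rx, (base_slot + hop) % slotframe_len))
          return schedule

      print(escalator_slots(["n7", "n3", "n1", "sink"], base_slot=10))
      # [('n7', 'n3', 10), ('n3', 'n1', 11), ('n1', 'sink', 12)]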

  4. Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH.

    PubMed

    Oh, Sukho; Hwang, DongYeop; Kim, Ki-Hyung; Kim, Kangseok

    2018-04-16

    Time Slotted Channel Hopping (TSCH) is widely used in the industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency for TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account the network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as a routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously by utilizing only the local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network could transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance with existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and higher packet delivery ratio compared to the existing schemes regardless of the network topology.

  5. Research on a Queue Scheduling Algorithm in Wireless Communications Network

    NASA Astrophysics Data System (ADS)

    Yang, Wenchuan; Hu, Yuanmei; Zhou, Qiancai

    This paper proposes a protocol called QS-CT, a queue scheduling mechanism based on multiple access in ad hoc networks, which adds queue scheduling to the RTS-CTS-DATA multiple access protocol. By endowing different queues with different scheduling mechanisms, it lets network nodes access the channel much more fairly and effectively, and greatly enhances performance. In order to observe the final performance of the network with the QS-CT protocol, we simulate it and compare it with MACA/C-T without QS-CT. Compared to MACA/C-T, the simulation results show that QS-CT greatly improves the throughput, delay, packet loss rate, and other key indicators.

  6. Design of an All-Optical Network Based on LCoS Technologies

    NASA Astrophysics Data System (ADS)

    Cheng, Yuh-Jiuh; Shiau, Yhi

    2016-06-01

    In this paper, an all-optical network composed of ROADMs (reconfigurable optical add-drop multiplexers), L2/L3 optical packet switches, and fiber optical cross-connections for fiber scheduling and measurement, based on LCoS (liquid crystal on silicon) technologies, is proposed. The L2/L3 optical packet switches are designed with optical output buffers. Only the header of an optical packet is converted to electronic signals to control the wavelength of the input ports, and the packet payload can be transparently forwarded to its output port. An optical output buffer is designed to queue packets when more than one incoming packet is destined for the same output port. For preserving service-packet sequencing and fairness of the routing sequence, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The wavelength of the input ports is designed for routing incoming packets using LCoS technologies. Finally, the proposed OFS (optical flow switch) with input buffers can quickly transfer big data to the output ports, and the main purpose of the OFS is to reduce the number of wavelength reflections. The all-optical content delivery network is composed of OFSs for a large amount of audio and video data transmission in the future.

  7. A novel LTE scheduling algorithm for green technology in smart grid.

    PubMed

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) application is being used nowadays to meet the demand of increasing power consumption. SG application is considered as a perfect solution for combining renewable energy resources and electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy system, namely, distribution automation (DA), distributed energy system-storage (DER) and electrical vehicle (EV), are investigated in order to study their suitability in Long Term Evolution (LTE) network. To compensate the weakness in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application's priority, whereas the algorithm makes scheduling decision based on dynamic weighting factors of multi-criteria to satisfy the demands (delay, past average throughput and instantaneous transmission rate) of quality of service. Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER as well as provide a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7 % and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively.
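    The abstract lists the criteria behind the dynamic weighting (head-of-line delay, past average throughput, instantaneous transmission rate) but not the metric itself, so the combination below is only a hedged illustration of a multi-criteria scheduling weight of that kind; the coefficients and functional form are assumptions.

      # Illustrative multi-criteria weight: a PF-style rate term scaled by how close
      # the head-of-line packet is to its delay budget. Not the paper's formula.
      def sched_weight(user, now):
          pf = user["instant_rate"] / max(user["avg_throughput"], 1e-9)
          urgency = (now - user["hol_arrival"]) / user["delay_budget"]
          return pf * (1.0 + urgency)

      users = [
          {"name": "DA", "instant_rate": 2e6, "avg_throughput": 2e6,
           "hol_arrival": 0.00, "delay_budget": 0.10},
          {"name": "EV", "instant_rate": 3e6, "avg_throughput": 2e6,
           "hol_arrival": 0.09, "delay_budget": 0.10},
      ]
      print(max(users, key=lambda u: sched_weight(u, now=0.10))["name"])
      # -> DA: head-of-line urgency outweighs EV's higher instantaneous rate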

  8. A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid

    PubMed Central

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) application is being used nowadays to meet the demand of increasing power consumption. SG application is considered as a perfect solution for combining renewable energy resources and electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy system, namely, distribution automation (DA), distributed energy system-storage (DER) and electrical vehicle (EV), are investigated in order to study their suitability in Long Term Evolution (LTE) network. To compensate the weakness in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application’s priority, whereas the algorithm makes scheduling decision based on dynamic weighting factors of multi-criteria to satisfy the demands (delay, past average throughput and instantaneous transmission rate) of quality of service. Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER as well as provide a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7 % and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703

  9. MoNET: media over net gateway processor for next-generation network

    NASA Astrophysics Data System (ADS)

    Elabd, Hammam; Sundar, Rangarajan; Dedes, John

    2001-12-01

    The MoNET (TM) (Media over Net) SX000 product family is designed using a scalable voice, video and packet-processing platform to address applications with channel densities from a few voice channels to four OC3 per card. This platform is developed for bridging the public circuit-switched network to the next generation packet telephony and data network. The platform consists of a DSP farm, RISC processors and interface modules. The DSP farm executes voice compression, image compression and line echo cancellation algorithms for a large number of voice, video, fax, modem or data channels. The RISC CPUs are used for performing various packetizations based on RTP, UDP/IP and ATM encapsulations. In addition, the RISC CPUs also participate in DSP farm load management and communication with the host and other MoP devices. The MoNET (TM) S1000 communications device is designed for voice processing and for bridging TDM to ATM and IP packet networks. The S1000 consists of a DSP farm based on the Carmel DSP core and a 32-bit RISC CPU, along with Ethernet, Utopia, PCI, and TDM interfaces. In this paper, we describe the VoIP infrastructure, the building blocks of the S500, S1000 and S3000 devices, the algorithms executed on these devices and their associated channel densities, the detailed DSP architecture, the memory architecture, data flow and scheduling.

  10. TinyOS-based quality of service management in wireless sensor networks

    USGS Publications Warehouse

    Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.

    2009-01-01

    Previously, the cost and extremely limited capabilities of sensors prohibited Quality of Service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive, and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm to improve bandwidth for high priority data in sensor networks, called Tiny-DWFQ. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows dynamic QoS for prioritized communications by continually adjusting the treatment of communication packets according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (the traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.
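    For context on the WFQ baseline named above (the dynamic adjustment that distinguishes Tiny-DWFQ is not specified in the abstract): classic weighted fair queuing tags each packet with a virtual finish time derived from its length and its queue's weight, and always serves the smallest tag. A minimal sketch:

      # Textbook WFQ selection by virtual finish time (the baseline, not Tiny-DWFQ).
      def enqueue(queue, pkt_len, weight, virtual_time):
          """Append a finish-time tag for a newly arrived packet and return it."""
          last_finish = queue[-1] if queue else virtual_time
          finish = max(last_finish, virtual_time) + pkt_len / weight
          queue.append(finish)
          return finish

      def dequeue(queues):
          """Serve the queue whose head packet has the smallest finish tag."""
          nonempty = {name: q for name, q in queues.items() if q}
          name = min(nonempty, key=lambda n: nonempty[n][0])
          return name, queues[name].pop(0)

      queues = {"high": [], "low": []}
      enqueue(queues["high"], pkt_len=100, weight=3.0, virtual_time=0.0)  # tag ~33.3
      enqueue(queues["low"], pkt_len=100, weight=1.0, virtual_time=0.0)   # tag 100.0
      enqueue(queues["high"], pkt_len=100, weight=3.0, virtual_time=0.0)  # tag ~66.7
      print([dequeue(queues)[0] for _ in range(3)])   # ['high', 'high', 'low']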

  11. PHY-DLL dialogue: cross-layer design for optical-wireless OFDM downlink transmission

    NASA Astrophysics Data System (ADS)

    Wang, Xuguo; Li, Lee

    2005-11-01

    The use of radio over fiber to provide radio access has a number of advantages, including the ability to deploy small, low-cost remote antenna units and ease of upgrade. Due to its great potential for increasing capacity and quality of service, the combination of Orthogonal Frequency Division Multiplexing (OFDM) modulation and sub-carrier multiplexed optical transmission is one of the best solutions for future millimeter-wave mobile communication, and this makes optimal utilization of valuable radio resources essential. This paper devises a cross-layer adaptive algorithm for an optical-wireless OFDM system, which takes into consideration not only the transmission power limitation in the physical layer, but also traffic scheduling and user fairness at the data-link layer. Based on the proportional fairness principle and the water-pouring theorem, we put forward a complete description of this six-step cross-layer adaptive downlink transmission algorithm. Simulation results show that the proposed cross-layer algorithm markedly outperforms the purely physical-layer adaptive algorithm: it improves the packet success rate per time chip and the average packet delay, and supports more active users.

  12. Energy Efficient Cluster Based Scheduling Scheme for Wireless Sensor Networks

    PubMed Central

    Srie Vidhya Janani, E.; Ganesh Kumar, P.

    2015-01-01

    The energy utilization of sensor nodes in large scale wireless sensor network points out the crucial need for scalable and energy efficient clustering protocols. Since sensor nodes usually operate on batteries, the maximum utility of network is greatly dependent on ideal usage of energy leftover in these sensor nodes. In this paper, we propose an Energy Efficient Cluster Based Scheduling Scheme for wireless sensor networks that balances the sensor network lifetime and energy efficiency. In the first phase of our proposed scheme, cluster topology is discovered and cluster head is chosen based on remaining energy level. The cluster head monitors the network energy threshold value to identify the energy drain rate of all its cluster members. In the second phase, scheduling algorithm is presented to allocate time slots to cluster member data packets. Here congestion occurrence is totally avoided. In the third phase, energy consumption model is proposed to maintain maximum residual energy level across the network. Moreover, we also propose a new packet format which is given to all cluster member nodes. The simulation results prove that the proposed scheme greatly contributes to maximum network lifetime, high energy, reduced overhead, and maximum delivery ratio. PMID:26495417

  13. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications from environment monitoring to event detection. The key challenge is to provide energy efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is to design an adaptive low latency and energy efficient MAC for data gathering to reduce the sleep latency. We propose staggered schedule, duty cycle adaptation, data prediction and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contentions. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second research effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst case end-to-end latency (referred to as delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for the tree and ring topologies and heuristics for arbitrary topologies. The third study addresses the problem of minimum latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be solved by M node-disjoint paths algorithm under multiple channel model. We further extended the algorithm to handle dynamic traffic changes and topology changes. A heuristic solution is proposed for MLSR under single channel interference. In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control and present both exponential and polynomial complexity solutions. Then we investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach which converges rapidly to the optimal parameter settings.

  14. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.

    PubMed

    Bi, Xia-An; Zhao, Junxia

    2017-01-01

    With the development of computer network bandwidth, packet classification algorithms that can deal with large-scale rule sets are in urgent need. Among existing algorithms, research on packet classification algorithms based on hierarchical tries has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie is beneficial for saving storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts both simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm.

  15. Hierarchical trie packet classification algorithm based on expectation-maximization clustering

    PubMed Central

    Bi, Xia-an; Zhao, Junxia

    2017-01-01

    With the development of computer network bandwidth, packet classification algorithms that can deal with large-scale rule sets are in urgent need. Among existing algorithms, research on packet classification algorithms based on hierarchical tries has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie is beneficial for saving storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts both simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476

  16. Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks

    PubMed Central

    Tham, Mau-Luen

    2016-01-01

    This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to the playback delay and resource constraints, by exploiting the multiuser diversity and the video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error-rate (BER), the importance level of packet and resource consumption efficiency factor. Instead, the lower level renders unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum distortion minimization problem, where each user weight reflects the total importance level of the packets that has been scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme significantly outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak-signal-to-noise-ratio (PSNR). Further test evaluates the suitability of equal power allocation which is the common assumption in the literature. PMID:26906398
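    The lower level described above water-fills frequency-selective power over the assigned subcarriers; the snippet below shows the generic water-filling computation (power_k = max(0, mu - 1/gain_k) with a common water level mu found by bisection), not the paper's weighted-distortion formulation.

      # Standard water-filling of a power budget over subcarrier gains.
      def water_fill(gains, total_power, iters=60):
          lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
          for _ in range(iters):                       # bisect on the water level mu
              mu = (lo + hi) / 2.0
              used = sum(max(0.0, mu - 1.0 / g) for g in gains)
              if used > total_power:
                  hi = mu
              else:
                  lo = mu
          return [max(0.0, lo - 1.0 / g) for g in gains]

      alloc = water_fill(gains=[2.0, 1.0, 0.25], total_power=3.0)
      print([round(p, 3) for p in alloc])   # weakest subcarrier may get no power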

  17. Link Scheduling Algorithm with Interference Prediction for Multiple Mobile WBANs

    PubMed Central

    Le, Thien T. T.

    2017-01-01

    As wireless body area networks (WBANs) become a key element in electronic healthcare (e-healthcare) systems, the coexistence of multiple mobile WBANs is becoming an issue. The network performance is negatively affected by the unpredictable movement of the human body. In such an environment, inter-WBAN interference can be caused by the overlapping transmission range of nearby WBANs. We propose a link scheduling algorithm with interference prediction (LSIP) for multiple mobile WBANs, which allows multiple mobile WBANs to transmit at the same time without causing inter-WBAN interference. In the LSIP, a superframe includes the contention access phase using carrier sense multiple access with collision avoidance (CSMA/CA) and the scheduled phase using time division multiple access (TDMA) for non-interfering nodes and interfering nodes, respectively. For interference prediction, we define a parameter called interference duration as the duration during which disparate WBANs interfere with each other. The Bayesian model is used to estimate and classify the interference using a signal to interference plus noise ratio (SINR) and the number of neighboring WBANs. The simulation results show that the proposed LSIP algorithm improves the packet delivery ratio and throughput significantly with acceptable delay. PMID:28956827

  18. Recovery Act: Energy Efficiency of Data Networks through Rate Adaptation (EEDNRA) - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Andrews; Spyridon Antonakopoulos; Steve Fortune

    2011-07-12

    This Concept Definition Study focused on developing a scientific understanding of methods to reduce energy consumption in data networks using rate adaptation. Rate adaptation is a collection of techniques that reduce energy consumption when traffic is light, and only require full energy when traffic is at full provisioned capacity. Rate adaptation is a very promising technique for saving energy: modern data networks are typically operated at average rates well below capacity, but network equipment has not yet been designed to incorporate rate adaptation. The Study concerns packet-switching equipment, routers and switches; such equipment forms the backbone of the modern Internet. The focus of the study is on algorithms and protocols that can be implemented in software or firmware to exploit hardware power-control mechanisms. Hardware power-control mechanisms are widely used in the computer industry, and are beginning to be available for networking equipment as well. Network equipment has different performance requirements than computer equipment because of the very fast rate of packet arrival; hence novel power-control algorithms are required for networking. This study resulted in five published papers, one internal report, and two patent applications, documented below. The specific technical accomplishments are the following:
    • A model for the power consumption of switching equipment used in service-provider telecommunication networks as a function of operating state, and measured power-consumption values for typical current equipment.
    • An algorithm for use in a router that adapts packet processing rate and hence power consumption to traffic load while maintaining performance guarantees on delay and throughput.
    • An algorithm that performs network-wide traffic routing with the objective of minimizing energy consumption, assuming that routers have less-than-ideal rate adaptivity.
    • An estimate of the potential energy savings in service-provider networks using feasibly-implementable rate adaptivity.
    • A buffer-management algorithm that is designed to reduce the size of router buffers, and hence energy consumed.
    • A packet-scheduling algorithm designed to minimize packet-processing energy requirements.
    Additional research is recommended in at least two areas: further exploration of rate-adaptation in network switching equipment, including incorporation of rate-adaptation in actual hardware, allowing experimentation in operational networks; and development of control protocols that allow parts of networks to be shut down while minimizing disruption to traffic flow in the network. The research is an integral part of a large effort within Bell Laboratories, Alcatel-Lucent, aimed at dramatic improvements in the energy efficiency of telecommunication networks. This Study did not explicitly consider any commercialization opportunities.

  19. An Efficient Conflict Detection Algorithm for Packet Filters

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung

    Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
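    As background for the conflict notion used above (the tuple-space search structure itself is more involved): two prefix-specified fields intersect exactly when one prefix is a prefix of the other, and a common definition of a conflict between two-field filters is that they intersect while neither contains the other. A minimal check along those lines, with our own field layout:

      # Minimal two-field (e.g., source/destination prefix) conflict check.
      def prefixes_overlap(p1, p2):
          """Prefixes as bit strings, e.g. '1011'; they intersect iff one is a prefix
          of the other."""
          n = min(len(p1), len(p2))
          return p1[:n] == p2[:n]

      def is_conflict(f1, f2):
          overlap = all(prefixes_overlap(a, b) for a, b in zip(f1, f2))
          f1_in_f2 = all(len(a) >= len(b) for a, b in zip(f1, f2))
          f2_in_f1 = all(len(b) >= len(a) for a, b in zip(f1, f2))
          return overlap and not (f1_in_f2 or f2_in_f1)

      f1 = ("10", "0110")      # (source prefix, destination prefix)
      f2 = ("1011", "01")
      print(is_conflict(f1, f2))   # True: they intersect, neither contains the other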

  20. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection

    PubMed Central

    Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung

    2015-01-01

    The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms. PMID:26437335
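
    The division of labour described above can be illustrated with a toy pre-filter: the CPU applies a cheap test based on short signature prefixes and hands only the packets that hit a prefix to the full matching stage. The 2-byte prefix test and the signature set below are illustrative assumptions, not HPMA's actual pre-filtering algorithm, and the GPU stage is represented here by a plain substring check.

```python
# Hedged sketch of the CPU/GPU division of labour: a cheap CPU pre-filter
# based on 2-byte signature prefixes; only packets that hit a prefix are
# queued for full pattern matching (done on the GPU in HPMA).  The prefix
# test is an illustrative stand-in for HPMA's actual pre-filter.

signatures = [b"evil", b"badcode", b"exploit"]
prefixes = {sig[:2] for sig in signatures}          # 2-byte pre-filter table

def cpu_prefilter(payload: bytes) -> bool:
    """Return True if the packet needs full inspection."""
    return any(payload[i:i + 2] in prefixes for i in range(len(payload) - 1))

def full_match(payload: bytes) -> bool:
    # Placeholder for the GPU pattern-matching stage.
    return any(sig in payload for sig in signatures)

packets = [b"hello world", b"contains evil bytes", b"evolution"]
suspicious = [p for p in packets if cpu_prefilter(p)]
alerts = [p for p in suspicious if full_match(p)]
print(len(suspicious), len(alerts))   # 2 suspicious, 1 confirmed
```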

  1. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.

    PubMed

    Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung

    2015-01-01

    The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.

  2. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  3. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  4. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
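
    For intuition about the per-packet work being parallelized, the following scalar sketch decodes Rice-coded values under one common convention (unary quotient of '1' bits terminated by a '0', followed by a k-bit remainder); the patent's actual bitstream format may differ.

```python
# Scalar illustration of Rice (Golomb, M = 2**k) decoding, the kind of
# per-packet work a GPU can do in parallel.  Convention assumed here:
# quotient coded in unary as q '1' bits terminated by a '0', followed by a
# k-bit remainder, most significant bit first (other conventions exist).

def rice_decode(bits, k, count):
    """Decode `count` values from an iterable of 0/1 bits."""
    it = iter(bits)
    values = []
    for _ in range(count):
        q = 0
        while next(it) == 1:            # unary part
            q += 1
        r = 0
        for _ in range(k):              # k-bit remainder
            r = (r << 1) | next(it)
        values.append((q << k) | r)
    return values

# 19 with k=2: q=4, r=3 -> '1111' '0' '11';  5 with k=2: q=1, r=1 -> '1' '0' '01'
stream = [1, 1, 1, 1, 0, 1, 1,   1, 0, 0, 1]
print(rice_decode(stream, k=2, count=2))   # [19, 5]
```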

  5. Investigation of Inter-Node B Macro Diversity for Single-Carrier Based Radio Access in Evolved UTRA Uplink

    NASA Astrophysics Data System (ADS)

    Kawai, Hiroyuki; Morimoto, Akihito; Higuchi, Kenichi; Sawahashi, Mamoru

    This paper investigates the gain of inter-Node B macro diversity for a scheduled-based shared channel using single-carrier FDMA radio access in the Evolved UTRA (UMTS Terrestrial Radio Access) uplink based on system-level simulations. More specifically, we clarify the gain of inter-Node B soft handover (SHO) with selection combining at the radio frame length level (=10msec) compared to that for hard handover (HHO) for a scheduled-based shared data channel, considering the gains of key packet-specific techniques including channel-dependent scheduling, adaptive modulation and coding (AMC), hybrid automatic repeat request (ARQ) with packet combining, and slow transmission power control (TPC). Simulation results show that the inter-Node B SHO increases the user throughput at the cell edge by approximately 10% for a short cell radius such as 100-300m due to the diversity gain from a sudden change in other-cell interference, which is a feature specific to full scheduled-based packet access. However, it is also shown that the gain of inter-Node B SHO compared to that for HHO is small in a macrocell environment when the cell radius is longer than approximately 500m due to the gains from hybrid ARQ with packet combining, slow TPC, and proportional fairness based channel-dependent scheduling.

  6. An End-to-End Loss Discrimination Scheme for Multimedia Transmission over Wireless IP Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Hai-Tao; Dong, Yu-Ning; Li, Yang

    With the rapid growth of wireless IP networks, wireless IP access networks have many potential applications in a variety of civilian and military fields. Many of these applications, such as real-time audio/video streaming, will require some form of end-to-end QoS assurance. In this paper, an algorithm called WMPLD (Wireless Multimedia Packet Loss Discrimination) is proposed for multimedia transmission control over wired-wireless hybrid IP networks. The relationship between packet length and packet loss rate in the Gilbert wireless error model is investigated. The algorithm detects the nature of packet losses by sending large and small packets alternately, and controls the sending rate of nodes accordingly. In addition, by means of an updating factor K, the algorithm can adapt quickly to changes in network state. Simulation results show that, compared to previous algorithms, the WMPLD algorithm improves network throughput as well as reduces the congestion loss rate in various situations.
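
    The core discrimination idea, loss rates that depend on packet length over a Gilbert-type wireless channel but not over a congested queue, can be sketched as below. The probe bookkeeping and the 1.5 ratio threshold are illustrative assumptions, not the WMPLD specification.

```python
# Hedged sketch of packet-loss discrimination via alternating large/small
# probes.  Wireless (Gilbert-model) losses hit long packets much harder than
# short ones, while congestion drops are largely size-independent, so the
# ratio of the two observed loss rates hints at the dominant loss cause.
# The 1.5 threshold is an illustrative assumption, not a WMPLD parameter.

def classify_loss(sent_large, lost_large, sent_small, lost_small,
                  ratio_threshold=1.5):
    rate_large = lost_large / sent_large
    rate_small = lost_small / sent_small
    if rate_large == 0 and rate_small == 0:
        return "no loss"
    if rate_small == 0 or rate_large / rate_small >= ratio_threshold:
        return "wireless loss dominant"    # keep sending rate
    return "congestion loss dominant"      # reduce sending rate

print(classify_loss(sent_large=100, lost_large=12, sent_small=100, lost_small=2))
# -> wireless loss dominant
print(classify_loss(sent_large=100, lost_large=5, sent_small=100, lost_small=4))
# -> congestion loss dominant
```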

  7. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    In this thesis, we analyze various factors that affect quality of service (QoS) communication in high-speed, packet-switching sub-networks. We hypothesize that sub-network-wide bandwidth reservation and guaranteed CPU processing power at endpoint systems for handling data traffic are indispensable to achieving hard end-to-end quality of service. Different bandwidth reservation strategies, traffic characterization schemes, and scheduling algorithms affect the network resources and CPU usage as well as the extent that QoS can be achieved. In order to analyze those factors, we design and implement a communication layer. Our experimental analysis supports our research hypothesis. The Resource ReSerVation Protocol (RSVP) is designed to realize resource reservation. Our analysis of RSVP shows that using RSVP solely is insufficient to provide hard end-to-end quality of service in a high-speed sub-network. Analysis of the IEEE 802.1p protocol also supports the research hypothesis.

  8. Environmental Fluctuations and Acoustic Data Communications

    DTIC Science & Technology

    2015-09-30

    July 2011 along with subsequent analysis of the experiment data. KAM11 Experiment (2011) A shallow water acoustic communications experiment...packet and packet-to-packet variability. Algorithm Design and Experiment Data Analysis Communication receiver algorithm design for shallow water is...exhibited substantial daily oceanographic variability. Analysis of the KAM11 experiment data this past year has focused on fixed source transmissions

  9. An Energy-Efficient Underground Localization System Based on Heterogeneous Wireless Networks

    PubMed Central

    Yuan, Yazhou; Chen, Cailian; Guan, Xinping; Yang, Qiuling

    2015-01-01

    A precision positioning system with energy efficiency is of great necessity for guaranteeing personnel safety in underground mines. The miners' location information should be transmitted to the control center timely and reliably; therefore, a heterogeneous network with a backbone based on high-speed Industrial Ethernet is deployed. Since the mobile wireless nodes are working in an irregular tunnel, a specific wireless propagation model cannot fit all situations. In this paper, an underground localization system is designed not only to adapt to various kinds of harsh tunnel environments, but also to reduce energy consumption and thus prolong the lifetime of the network. Three key techniques are developed and implemented to improve the system performance, including a step counting algorithm with accelerometers, a power control algorithm and an adaptive packet scheduling scheme. The simulation study and experimental results show the effectiveness of the proposed algorithms and the implementation. PMID:26016918

  10. Next generation communications satellites: multiple access and network studies

    NASA Technical Reports Server (NTRS)

    Meadows, H. E.; Schwartz, M.; Stern, T. E.; Ganguly, S.; Kraimeche, B.; Matsuo, K.; Gopal, I.

    1982-01-01

    Efficient resource allocation and network design for satellite systems serving heterogeneous user populations with large numbers of small direct-to-user Earth stations are discussed. The focus is on TDMA systems involving a high degree of frequency reuse by means of satellite-switched multiple beams (SSMB) with varying degrees of onboard processing. Algorithms for the efficient utilization of the satellite resources were developed. Topics discussed include the effect of skewed traffic, overlapping beams, and batched arrivals in packet-switched SSMB systems; the integration of stream and bursty traffic; and the performance bounds and computational complexity of optimal circuit scheduling in SSMB systems.

  11. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.

  12. Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Feng

    2018-03-01

    Wireless communication networks have features such as limited bandwidth, time-varying channels and dynamic topology. Ad hoc networks therefore face many difficulties in access control, bandwidth distribution, resource assignment and congestion control. To address these, a wireless packet-distributing evolutionary algorithm based on particle swarm optimization (DPSO) for ad hoc networks is proposed. First, the parameters that affect network performance are analyzed in order to obtain a network performance objective function. Second, the improved PSO evolutionary algorithm is used to solve this optimization problem, from local to global, in the process of distributing network packets. The simulation results show that the algorithm can ensure fairness and timeliness of network transmission, as well as improve the integrated resource utilization efficiency of the ad hoc network.
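
    A generic particle swarm optimization loop of the kind the paper builds on is sketched below; the cost function standing in for the paper's network performance objective, and all PSO parameters, are illustrative assumptions.

```python
# Generic particle swarm optimization loop applied to a made-up network-cost
# function over per-link packet-distribution weights.  The cost function and
# all PSO parameters are illustrative assumptions, not the paper's model.

import random

def network_cost(w):
    # Placeholder cost: penalize uneven load across three links.
    total = sum(w)
    shares = [x / total for x in w]
    return sum((s - 1 / 3) ** 2 for s in shares)

DIM, SWARM, ITERS = 3, 20, 100
W_INERTIA, C1, C2 = 0.7, 1.5, 1.5

particles = [[random.uniform(0.1, 1.0) for _ in range(DIM)] for _ in range(SWARM)]
velocities = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in particles]
gbest = min(pbest, key=network_cost)

for _ in range(ITERS):
    for i, p in enumerate(particles):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (W_INERTIA * velocities[i][d]
                                + C1 * r1 * (pbest[i][d] - p[d])
                                + C2 * r2 * (gbest[d] - p[d]))
            p[d] = max(0.1, p[d] + velocities[i][d])   # keep weights positive
        if network_cost(p) < network_cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=network_cost)

print(gbest, network_cost(gbest))
```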

  13. Composable Flexible Real-time Packet Scheduling for Networks on-Chip

    DTIC Science & Technology

    2012-05-16

    ...real-time flows need to be composable. We set this as the design goal for our packet scheduling discipline developed in this paper. Motivating...with closest deadline is chosen to forward to the next router. Traffic Model: We assume a traffic model for real-time flows similar to the one used

  14. A Rate-Based Congestion Control Algorithm for the SURAP 4 Packet Radio Architecture (SRNTN-72)

    DTIC Science & Technology

    1990-01-01

    factor, one packet, as at connection initialization. However, these TCP enhancements do not solve the fairness problem. The slow start algorithm ma...and signal interference (including jamming) and by the delays demanded by the link-layer protocols in the absence of contention for resources at the...values. This role would be redundant if bits-per-second rations used measurements of packet duration to determine how fast to decrease, at the expense of

  15. Detecting Anomalies in Process Control Networks

    NASA Astrophysics Data System (ADS)

    Rrushi, Julian; Kang, Kyoung-Don

    This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
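
    The inspection step can be illustrated with a minimal sketch: a per-variable logistic model returns the probability that a value about to be written is normal, and the packet is flagged when that probability falls below a threshold. The weights, bias, and threshold below are made up for the example and are not the paper's fitted parameters.

```python
# Hedged sketch of the inspection step: a per-variable logistic model gives
# the probability that a value a packet is about to write is normal; the
# packet is flagged when that probability drops below a threshold.  Weights,
# bias and threshold are illustrative, not the paper's fitted parameters.

import math

def normalcy_probability(value, weight=-0.08, bias=6.0):
    # Single feature: the candidate value itself (e.g. a setpoint register).
    z = bias + weight * value
    return 1.0 / (1.0 + math.exp(-z))

def inspect_write(value, threshold=0.5):
    p = normalcy_probability(value)
    return ("normal" if p >= threshold else "anomalous", round(p, 3))

print(inspect_write(40))    # value inside the learned range -> normal
print(inspect_write(150))   # far outside the learned range  -> anomalous
```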

  16. Back pressure based multicast scheduling for fair bandwidth allocation.

    PubMed

    Sarkar, Saswati; Tassiulas, Leandros

    2005-09-01

    We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.

  17. The Long Life Ration Packet (LLRP)

    DTIC Science & Technology

    1991-02-18

    entree, a cereal bar, a cookie component, a candy component, an instant beverage and an accessory packet. It weighs less than one pound for an...which may be eaten as is or rapidly reconstituted with either hot or cold water; other low moisture ready-to-eat foods and several instant beverage... instant beverage and accessory packet. • Field testing of prototype LRP scheduled for 4Q90 using Fort Drum IRP troops. • In-Process Review for acceptance of

  18. Analysis of adaptive algorithms for an integrated communication network

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim

    1985-01-01

    Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. An integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.

  19. A novel fair active queue management algorithm based on traffic delay jitter

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quantity of data traffic delivered in the network, a congestion control strategy is adopted. Based on a study of many active queue management (AQM) algorithms, this paper proposes a novel AQM algorithm named JFED. JFED can stabilize the queue length at a desirable level by adjusting the output traffic rate and adopting a reasonable calculation of the packet drop probability based on buffer queue length and traffic jitter; by taking packet delay jitter into account it also supports bursty packet traffic, so that it can better carry streaming (flow media) data. JFED imposes effective punishment on non-responsive flows with a fully stateless method. To verify the performance of JFED, it is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.
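
    A RED-style sketch in the spirit of JFED is shown below: the drop probability grows with queue occupancy but is relaxed when recent delay jitter suggests a short burst rather than sustained congestion. The thresholds and the jitter relief factor are illustrative assumptions, not JFED's actual formula.

```python
# Hedged RED-style sketch in the spirit of JFED: drop probability rises with
# queue occupancy but is relaxed when recent delay jitter indicates a short
# burst rather than sustained congestion.  Thresholds and the jitter relief
# factor are illustrative assumptions, not JFED's actual calculation.

MIN_TH, MAX_TH, MAX_P = 50, 200, 0.1      # queue thresholds (packets)

def drop_probability(queue_len, jitter, jitter_ref=0.005):
    if queue_len < MIN_TH:
        return 0.0
    if queue_len >= MAX_TH:
        return 1.0
    base = MAX_P * (queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    burst_relief = min(1.0, jitter / jitter_ref)   # high jitter -> likely burst
    return base * (1.0 - 0.5 * burst_relief)       # tolerate bursts a little more

print(drop_probability(queue_len=120, jitter=0.001))  # steady traffic
print(drop_probability(queue_len=120, jitter=0.010))  # bursty traffic, lower drop prob.
```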

  20. Providing end-to-end QoS for multimedia applications in 3G wireless networks

    NASA Astrophysics Data System (ADS)

    Guo, Katherine; Rangarajan, Samapth; Siddiqui, M. A.; Paul, Sanjoy

    2003-11-01

    As the usage of wireless packet data services increases, wireless carriers today are faced with the challenge of offering multimedia applications with QoS requirements within current 3G data networks. End-to-end QoS requires support at the application, network, link and medium access control (MAC) layers. We discuss existing CDMA2000 network architecture and show its shortcomings that prevent supporting multiple classes of traffic at the Radio Access Network (RAN). We then propose changes in RAN within the standards framework that enable support for multiple traffic classes. In addition, we discuss how Session Initiation Protocol (SIP) can be augmented with QoS signaling for supporting end-to-end QoS. We also review state of the art scheduling algorithms at the base station and provide possible extensions to these algorithms to support different classes of traffic as well as different classes of users.

  1. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    NASA Astrophysics Data System (ADS)

    Gao, Qian

    For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into the category of these challenges. Rigorous proofs and analyses are provided for each of our works to make a fair comparison with the corresponding peer works and clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias that is conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, MSM-JDCM has many other merits, such as being capable of mitigating nonlinear distortion by including a peak-to-average power ratio (PAPR) constraint, minimizing inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reducing the bit error rate (BER) by combining with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with cross-talk. Our novel constellation design scheme, termed CSK-Advanced, is compared with the conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum mean squared error (LMMSE) based post-equalizers. In addition, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into the problem of two-phase channel estimation in a relayed wireless network. The channel estimates in each phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 can corrupt the estimate of the source-to-relay (StR) channel in phase 2. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively.
The last part deals with a transmission scheduling problem in an uplink multiple-input multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, then it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression for throughput is first derived and then used to develop a scheduling algorithm that maximizes the throughput. Our full-duplex scheduling is compared with half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to the employment of both MIMO and CDMA are observed.

  2. Energy-saving EPON Bandwidth Allocation Algorithm Supporting ONU's Sleep Mode

    NASA Astrophysics Data System (ADS)

    Zhang, Yinfa; Ren, Shuai; Liao, Xiaomin; Fang, Yuanyuan

    2014-09-01

    A new bandwidth allocation algorithm was presented by combining merits of the IPACT algorithm and the cyclic DBA algorithm based on the DBA algorithm for ONU's sleep mode. Simulation results indicate that compared with the normal mode ONU, the ONU's sleep mode can save about 74% of energy. The new algorithm has a smaller average packet delay and queue length in the upstream direction. While in the downstream direction, the average packet delay of the new algorithm is less than polling cycle Tcycle and the average queue length is less than the product of Tcycle and the maximum link rate. The new algorithm achieves a better compromise between energy-saving and ensuring quality of service.

  3. Highly Efficient Multi Channel Packet Forwarding with Round Robin Intermittent Periodic Transmit for Multihop Wireless Backhaul Networks

    PubMed Central

    Furukawa, Hiroshi

    2017-01-01

    Round-Robin-based Intermittent Periodic Transmit (RR-IPT) has been proposed to achieve highly efficient multi-hop relaying in multi-hop wireless backhaul networks (MWBN) where relay nodes are two-dimensionally deployed. This paper investigates a new multi-channel packet scheduling and forwarding scheme for RR-IPT. Downlink traffic is forwarded by RR-IPT via one of the channels, while uplink traffic and part of the downlink traffic are accommodated in the other channel. Comparing IPT and carrier sense multiple access with collision avoidance (CSMA/CA) for the uplink/downlink packet forwarding channel, IPT is more effective in reducing the packet loss rate, whereas CSMA/CA is better in terms of system throughput and packet delay improvement. PMID:29137164

  4. QoS-Oriented High Dynamic Resource Allocation in Vehicular Communication Networks

    PubMed Central

    2014-01-01

    Vehicular ad hoc networks (VANETs) are emerging as a new research area and are attracting increasing attention from both industry and research communities. In this context, a dynamic resource allocation policy that maximizes the use of available resources and meets the quality of service (QoS) requirements of constraining applications is proposed. It is a combination of a fair packet scheduling policy and a new adaptive QoS-oriented call admission control (CAC) scheme based on the vehicle density variation. This scheme decides whether a connection request is to be admitted into the system, while providing fair access and guaranteeing the desired throughput. The proposed algorithm showed good performance in testing in a real-world environment. PMID:24616639

  5. Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks

    PubMed Central

    Mahjoub, Reem K.; Elleithy, Khaled

    2017-01-01

    The actor nodes are the spine of wireless sensor and actor networks (WSANs) that collaborate to perform a specific task in an unverified and uneven environment. Thus, there is a possibility of a high failure rate in such unfriendly scenarios due to several factors, such as power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, and inter-actor connectivity problems. Therefore, it is extremely important to discover the failure of a cut-vertex actor and network-disjoint in order to improve the Quality-of-Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node. In addition, it replaces the critical node with a backup node prior to complete node failure, which helps balance the network performance. Packets are handled using the Network Integration and Message Forwarding (NIMF) algorithm, which determines the source of packet forwarding: either actor or sensor. This decision-making capability of the algorithm controls the packet forwarding rate to maintain the network for a longer time. Furthermore, to handle the proper routing strategy, the Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm is deployed to decide the priority of the packets to be forwarded based on the significance of the information available in the packet. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNET++ simulation. PMID:28420102

  6. Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks.

    PubMed

    Mahjoub, Reem K; Elleithy, Khaled

    2017-04-14

    The actor nodes are the spine of wireless sensor and actor networks (WSANs) that collaborate to perform a specific task in an unverified and uneven environment. Thus, there is a possibility of a high failure rate in such unfriendly scenarios due to several factors, such as power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, and inter-actor connectivity problems. Therefore, it is extremely important to discover the failure of a cut-vertex actor and network-disjoint in order to improve the Quality-of-Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node. In addition, it replaces the critical node with a backup node prior to complete node failure, which helps balance the network performance. Packets are handled using the Network Integration and Message Forwarding (NIMF) algorithm, which determines the source of packet forwarding: either actor or sensor. This decision-making capability of the algorithm controls the packet forwarding rate to maintain the network for a longer time. Furthermore, to handle the proper routing strategy, the Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm is deployed to decide the priority of the packets to be forwarded based on the significance of the information available in the packet. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNET++ simulation.

  7. A distributed geo-routing algorithm for wireless sensor networks.

    PubMed

    Joshi, Gyanendra Prasad; Kim, Sung Won

    2009-01-01

    Geographic wireless sensor networks use position information for greedy routing. Greedy routing works well in dense networks, whereas in sparse networks it may fail and require a recovery algorithm. Recovery algorithms help the packet to get out of the communication void. However, these algorithms are generally costly for resource-constrained position-based wireless sensor networks (WSNs). In this paper, we propose a void avoidance algorithm (VAA), a novel idea based on upgrading virtual distance. VAA allows wireless sensor nodes to remove all stuck nodes by transforming the routing graph and forwarding packets using only greedy routing. In VAA, the stuck node upgrades its distance until it finds a next-hop node that is closer to the destination than it is. VAA guarantees packet delivery if there is a topologically valid path. Further, it is completely distributed, responds immediately to node failure or topology changes, and does not require planarization of the network. NS-2 is used to evaluate the performance and correctness of VAA and we compare its performance to other protocols. Simulations show that our proposed algorithm consumes less energy, has an efficient path and substantially lower control overhead.

  8. Modified weighted fair queuing for packet scheduling in mobile WiMAX networks

    NASA Astrophysics Data System (ADS)

    Satrya, Gandeva B.; Brotoharsono, Tri

    2013-03-01

    The increase in user mobility and the need for data access anytime also increase the interest in broadband wireless access (BWA). IEEE 802.16e aims to assure the best available quality of experience for mobile data service users. The main problem in assuring high QoS is how to allocate available resources among users in order to meet QoS requirements for criteria such as delay, throughput, packet loss and fairness. The IEEE standard does not specify a scheduling mechanism, leaving it open for implementer differentiation. Five QoS service classes are defined by IEEE 802.16: Unsolicited Grant Service (UGS), Extended Real-Time Polling Service (ertPS), Real-Time Polling Service (rtPS), Non-Real-Time Polling Service (nrtPS) and Best Effort (BE). Each class has different QoS parameter requirements for throughput and delay/jitter constraints. This paper proposes a Modified Weighted Fair Queuing (MWFQ) scheduling scenario based on Weighted Round Robin (WRR) and Weighted Fair Queuing (WFQ). The performance of MWFQ was assessed using the above five QoS criteria. The simulation shows that using the concept of total packet size calculation improves the network's performance.
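
    The role of per-class weights and total packet size in the service order can be illustrated with a generic deficit-round-robin style sketch over 802.16-like queues; the weights and quantum are assumptions for the example, and this is not the MWFQ algorithm itself.

```python
# Generic weighted, size-aware round-robin (deficit round robin) sketch to
# illustrate how per-class weights and packet sizes drive the service order
# across 802.16-style queues.  Weights and quantum are illustrative; this is
# not the paper's MWFQ algorithm itself.

from collections import deque

QUANTUM = 1000                      # base quantum in bytes per round
queues = {                          # class -> (weight, FIFO of packet sizes)
    "UGS":  (4, deque([200, 200, 200])),
    "rtPS": (2, deque([700, 300])),
    "BE":   (1, deque([1500, 1500])),
}
deficit = {cls: 0 for cls in queues}
served = []

while any(q for _, q in queues.values()):
    for cls, (weight, q) in queues.items():
        if not q:
            continue
        deficit[cls] += weight * QUANTUM
        while q and q[0] <= deficit[cls]:
            size = q.popleft()
            deficit[cls] -= size
            served.append((cls, size))
        if not q:
            deficit[cls] = 0        # no deficit carry-over for an empty queue

print(served)                       # service order reflects weights and sizes
```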

  9. Experimental evaluation of the impact of packet capturing tools for web services.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choe, Yung Ryn; Mohapatra, Prasant; Chuah, Chen-Nee

    Network measurement is a discipline that provides the techniques to collect data that are fundamental to many branches of computer science. While many capturing tools and comparisons have been made available in the literature and elsewhere, the impact of these packet capturing tools on existing processes has not been thoroughly studied. While not a concern for collection methods in which dedicated servers are used, many usage scenarios of packet capturing now require the packet capturing tool to run concurrently with operational processes. In this work we perform experimental evaluations of the performance impact that packet capturing processes have on web-based services; in particular, we observe the impact on web servers. We find that packet capturing processes indeed impact the performance of web servers, but on a multi-core system the impact varies depending on whether the packet capturing and web hosting processes are co-located or not. In addition, the architecture and behavior of the web server and process scheduling are coupled with the behavior of the packet capturing process, which in turn also affects the web server's performance.

  10. The Impact of Utilizing a Flexible Work Schedule on the Perceived Career Advancement Potential of Women

    ERIC Educational Resources Information Center

    Rogier, Sara A.; Padgett, Margaret Y.

    2004-01-01

    This study examined whether a woman working a flexible schedule would be perceived as having less career advancement potential than a woman on a regular schedule. Participants reviewed a packet of materials simulating the personnel file of a female employee in an accounting firm who was seeking promotion from manager to senior manager. Results…

  11. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    PubMed Central

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-01-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network. PMID:27754405

  12. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks.

    PubMed

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-10-14

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.

  13. Distributed Optimal Dispatch of Distributed Energy Resources Over Lossy Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Junfeng; Yang, Tao; Wu, Di

    In this paper, we consider the economic dispatch problem (EDP), where a cost function that is assumed to be strictly convex is assigned to each of the distributed energy resources (DERs), over packet-dropping networks. The goal of a standard EDP is to minimize the total generation cost while meeting the total demand and satisfying individual generator output limits. We propose a distributed algorithm for solving the EDP over networks. The proposed algorithm is resilient against packet drops over communication links. Under the assumption that the underlying communication network is strongly connected with a positive probability and the packet drops are independent and identically distributed (i.i.d.), we show that the proposed algorithm is able to solve the EDP. Numerical simulation results are used to validate and illustrate the main results of the paper.

  14. Crossbar Switches For Optical Data-Communication Networks

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P.

    1994-01-01

    Optoelectronic and electro-optical crossbar switches called "permutation engines" (PE's) developed to route packets of data through fiber-optic communication networks. Basic network concept described in "High-Speed Optical Wide-Area Data-Communication Network" (NPO-18983). Nonblocking operation achieved by decentralized switching and control scheme. Each packet routed up or down in each column of this 5-input/5-output permutation engine. Routing algorithm ensures each packet arrives at its designated output port without blocking any other packet that does not contend for same output port.

  15. Packet Scheduling Mechanism to Improve Quality of Short Flows and Low-Rate Flows

    NASA Astrophysics Data System (ADS)

    Yokota, Kenji; Asaka, Takuya; Takahashi, Tatsuro

    In recent years, elephant flows have been increasing with the expansion of peer-to-peer (P2P) applications on the Internet. As a result, bandwidth is occupied by specific users, triggering unfair resource allocation. The main packet-scheduling mechanism currently employed is first-in first-out (FIFO), in which the available bandwidth of short flows is limited by elephant flows. Least attained service (LAS), which decides the transfer priority of packets by the total amount of data transferred in each flow, was proposed to solve this problem. However, routers with LAS limit flows with a large amount of transferred data even if they are low-rate. Therefore, it is necessary to improve the quality of low-rate flows with long holding times, such as voice over Internet protocol (VoIP) applications. This paper proposes rate-based priority control (RBPC), which calculates the flow rate and controls priority based on it. Our proposed method can transfer short flows and low-rate flows preferentially. Moreover, its fairness is shown through simulations.
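
    The rate calculation at the heart of this approach can be sketched as follows: estimate each flow's rate over a sliding window and forward packets of flows below a rate threshold ahead of elephant flows. The window length and the 100 kb/s threshold are illustrative assumptions rather than RBPC's parameters.

```python
# Hedged sketch of rate-based priority control: estimate each flow's rate
# over a sliding window and forward packets of low-rate flows ahead of
# high-rate (elephant) flows.  The 100 kb/s threshold and 1 s window are
# illustrative assumptions, not RBPC's actual parameters.

import time, heapq

RATE_THRESHOLD = 100_000 / 8            # 100 kb/s in bytes/s
WINDOW = 1.0                            # rate-estimation window (s)

flow_bytes = {}                         # flow_id -> [(timestamp, size), ...]

def flow_rate(flow_id, now):
    samples = flow_bytes.get(flow_id, [])
    recent = [(t, s) for t, s in samples if now - t <= WINDOW]
    flow_bytes[flow_id] = recent
    return sum(s for _, s in recent) / WINDOW

def enqueue(queue, flow_id, size, seqno):
    now = time.monotonic()
    flow_bytes.setdefault(flow_id, []).append((now, size))
    high_priority = flow_rate(flow_id, now) <= RATE_THRESHOLD
    # Lower tuple sorts first: priority class, then arrival order.
    heapq.heappush(queue, ((0 if high_priority else 1), seqno, flow_id, size))

q = []
traffic = [("voip", 160)] * 2 + [("p2p", 1500)] * 10 + [("voip", 160)]
for i, (fid, size) in enumerate(traffic):
    enqueue(q, fid, size, i)
order = [heapq.heappop(q)[2] for _ in range(len(q))]
print(order)   # the late voip packet overtakes the p2p packets queued after
               # the p2p flow crossed the rate threshold
```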

  16. Implementation of the Algorithm for Congestion control in the Dynamic Circuit Network (DCN)

    NASA Astrophysics Data System (ADS)

    Nalamwar, H. S.; Ivanov, M. A.; Buddhawar, G. U.

    2017-01-01

    Transmission Control Protocol (TCP) incast congestion happens when a number of senders transmit in parallel to the same server in a high-bandwidth, low-latency network. For many data center network applications, such as search engines, heavy traffic is present on such a server. Incast congestion degrades the entire performance as packets are lost at the server side due to buffer overflow, and as a result the response time becomes longer. In this work, we focus on TCP throughput, round-trip time (RTT), receive window and retransmission. Our method is based on proactive adjustment of the TCP receive window before packet loss occurs. We aim to avoid wasting bandwidth by adjusting the window size according to the number of packets. To avoid packet loss, the ICTCP algorithm has been implemented in the data center network (ToR).

  17. Increasingly minimal bias routing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataineh, Abdulla; Court, Thomas; Roweth, Duncan

    2017-02-21

    A system and algorithm configured to generate diversity at the traffic source so that packets are uniformly distributed over all of the available paths, but to increase the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point, but increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
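
    A minimal sketch of the increasing-bias idea follows: at each hop the packet chooses between a minimal-path port and a non-minimal port, and the probability of taking the minimal port grows with the hops already traversed. The linear weighting below is an illustrative stand-in for the patent's configurable bias tables, not its actual mechanism.

```python
# Hedged sketch of increasing minimal bias: at each hop a packet chooses
# between a minimal-path port and a non-minimal (diversity) port, and the
# probability of taking the minimal port grows with the number of hops the
# packet has already taken.  The linear weighting is an illustrative stand-in
# for configurable bias tables.

import random

def choose_port(minimal_ports, nonminimal_ports, hops_taken, max_hops=6):
    bias = min(1.0, 0.3 + 0.7 * hops_taken / max_hops)   # prefer minimal more each hop
    if nonminimal_ports and random.random() > bias:
        return random.choice(nonminimal_ports)           # spread load near the source
    return random.choice(minimal_ports)                  # converge to minimal paths

# At injection (hop 0) roughly 70% of packets take a non-minimal port; by the
# final hops most traffic follows the minimal path.
print(choose_port(["m0", "m1"], ["n0", "n1", "n2"], hops_taken=0))
print(choose_port(["m0", "m1"], ["n0", "n1", "n2"], hops_taken=5))
```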

  18. Implementation of a data packet generator using pattern matching for wearable ECG monitoring systems.

    PubMed

    Noh, Yun Hong; Jeong, Do Un

    2014-07-15

    In this paper, a packet generator using a pattern matching algorithm for real-time abnormal heartbeat detection is proposed. The packet generator creates a very small data packet which conveys sufficient crucial information for health condition analysis. The data packet envelopes real-time ECG signals and transmits them to a smartphone via Bluetooth. An Android application was developed specifically to decode the packet and extract ECG information for health condition analysis. Several graphical presentations are displayed and shown on the smartphone. We evaluate the abnormal heartbeat detection accuracy using the MIT/BIH Arrhythmia Database and real-time experiments. The experimental results confirm our finding that abnormal heartbeat detection is practically possible. We also performed data compression ratio and signal restoration performance evaluations to establish the usefulness of the proposed packet generator, and the results were excellent.

  19. Protocol, Engineering Research Center, University of California, Santa Barbara

    DTIC Science & Technology

    2005-12-01

    minimizing the energy consumption in idle periods. We have designed an asynchronous wakeup schedule based on the theory of block designs. The idea is...performance of ad hoc networks through innovative packet scheduling (Baker). • Developed a number of novel schemes to ensure loop freedom in on-demand routing...network nodes to schedule their transmissions to avoid collisions (Garcia-Luna-Aceves). • Designed and analyzed the Hybrid Activation Multiple Access (HAMA

  20. A Bibliography of Packet Radio Literature.

    DTIC Science & Technology

    1984-03-01

    Gafni, E. and D. Bertsekas, "Distributed Algorithms for Generating Loop-Free Routes in Networks with Frequently...December 1980). Leiner, B., K. Klemba, and J. Tornow, "Packet Radio Networking," Computer World, pp. 26-37, 30, (September 27, 1982). Liu, J..."Distributed Routing and Relay Management in Mobile Packet Radio Networks," Proceedings IEEE COMPCON Fall '80, pp. 235-243 (1980). MacGregor, W., J. Westcott, and

  1. Lightweight active router-queue management for multimedia networking

    NASA Astrophysics Data System (ADS)

    Parris, Mark; Jeffay, Kevin; Smith, F. D.

    1998-12-01

    The Internet research community is promoting active queue management in routers as a proactive means of addressing congestion in the Internet. Active queue management mechanisms such as Random Early Detection (RED) work well for TCP flows but can fail in the presence of unresponsive UDP flows. Recent proposals extend RED to strongly favor TCP and TCP-like flows and to actively penalize `misbehaving' flows. This is problematic for multimedia flows that, although potentially well-behaved, do not, or can not, satisfy the definition of a TCP-like flow. In this paper we investigate an extension to RED active queue management called Class-Based Thresholds (CBT). The goal of CBT is to reduce congestion in routers and to protect TCP from all UDP flows while also ensuring acceptable throughput and latency for well-behaved UDP flows. CBT attempts to realize a `better than best effort' service for well-behaved multimedia flows that is comparable to that achieved by a packet or link scheduling discipline, however, CBT does this by queue management rather than by scheduling. We present results of experiments comparing our mechanisms to plain RED and to FRED, a variant of RED designed to ensure fair allocation of bandwidth amongst flows. We also compare CBT to a packet scheduling scheme. The experiments show that CBT (1) realizes protection for TCP, and (2) provides throughput and end-to-end latency for tagged UDP flows, that is better than that under FRED and RED and comparable to that achieved by packet scheduling. Moreover CBT is a lighter-weight mechanism than FRED in terms of its state requirements and implementation complexity.
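
    The thresholding idea can be sketched as follows: each traffic class may occupy at most a configured fraction of the shared queue, so unresponsive UDP cannot crowd out TCP while tagged multimedia UDP keeps a protected share. The class shares and queue size below are assumptions for the example; in the paper, CBT is built as an extension of RED, which this sketch omits.

```python
# Hedged sketch of class-based thresholds: each traffic class may occupy at
# most a configured fraction of the shared queue, so unresponsive UDP cannot
# starve TCP while tagged multimedia UDP keeps a protected share.  The class
# shares and queue size are illustrative, and RED-style marking is omitted.

QUEUE_LIMIT = 240                     # total queue capacity (packets)
CLASS_SHARE = {"tcp": 0.70, "tagged_udp": 0.20, "other_udp": 0.10}

occupancy = {cls: 0 for cls in CLASS_SHARE}

def enqueue(packet_class):
    limit = int(CLASS_SHARE[packet_class] * QUEUE_LIMIT)
    if occupancy[packet_class] >= limit or sum(occupancy.values()) >= QUEUE_LIMIT:
        return False                  # drop: class (or whole queue) over its threshold
    occupancy[packet_class] += 1
    return True

def dequeue(packet_class):
    if occupancy[packet_class] > 0:
        occupancy[packet_class] -= 1

# A burst of unresponsive UDP only fills its own 10% share; TCP space is intact.
drops = sum(0 if enqueue("other_udp") else 1 for _ in range(100))
print(drops, occupancy)               # 76 dropped, other_udp capped at 24
```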

  2. Packet Traffic Dynamics Near Onset of Congestion in Data Communication Network Model

    NASA Astrophysics Data System (ADS)

    Lawniczak, A. T.; Tang, X.

    2006-05-01

    The dominant technology of data communication networks is the Packet Switching Network (PSN). It is a complex technology organized as various hierarchical layers according to the International Standard Organization (ISO) Open Systems Interconnect (OSI) Reference Model. The Network Layer of the ISO OSI Reference Model is responsible for delivering packets from their sources to their destinations and for dealing with congestion if it arises in a network. Thus, we focus on this layer and present an abstraction of the Network Layer of the ISO OSI Reference Model. Using this abstraction we investigate how the onset of traffic congestion is affected for various routing algorithms by changes in network connection topology. We study how aggregate measures of network performance depend on network connection topology and routing. We explore packet traffic spatio-temporal dynamics near the phase transition point from free flow to congestion for various network connection topologies and routing algorithms. We consider static and adaptive routings. We present selected simulation results.

  3. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    NASA Astrophysics Data System (ADS)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of service (QoS) guarantees in real-time communication for multimedia applications are significantly important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach of the Moving Picture Experts Group (MPEG) cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG-coded stream into variable-rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques through simulation and real video experiments in a TCP/IP environment.

  4. Network traffic behaviour near phase transition point

    NASA Astrophysics Data System (ADS)

    Lawniczak, A. T.; Tang, X.

    2006-03-01

    We explore packet traffic dynamics in a data network model near phase transition point from free flow to congestion. The model of data network is an abstraction of the Network Layer of the OSI (Open Systems Interconnect) Reference Model of packet switching networks. The Network Layer is responsible for routing packets across the network from their sources to their destinations and for control of congestion in data networks. Using the model we investigate spatio-temporal packets traffic dynamics near the phase transition point for various network connection topologies, and static and adaptive routing algorithms. We present selected simulation results and analyze them.

  5. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed in order to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. In order to resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, and it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.

  6. OSLG: A new granting scheme in WDM Ethernet passive optical networks

    NASA Astrophysics Data System (ADS)

    Razmkhah, Ali; Rahbar, Akbar Ghaffarpour

    2011-12-01

    Several granting schemes have been proposed to grant transmission windows and perform dynamic bandwidth allocation (DBA) in passive optical networks (PONs). Generally, granting schemes suffer from bandwidth wastage of the granted windows. Here, we propose a new granting scheme for WDM Ethernet PONs, called Optical Network Unit (ONU) Side Limited Granting (OSLG), that conserves upstream bandwidth, thus decreasing queuing delay and packet drop ratio. In OSLG, each ONU, instead of the optical line terminal (OLT), determines its own transmission window. Two OSLG algorithms are proposed in this paper: the OSLG_GA algorithm, which determines the size of the transmission window in such a way that the bandwidth wastage problem is relieved, and the OSLG_SC algorithm, which saves unused bandwidth for better bandwidth utilization later on. OSLG can be used as the granting scheme of any DBA technique to provide better performance in terms of packet drop ratio and queuing delay. Our performance evaluations show the effectiveness of OSLG in reducing packet drop ratio and queuing delay under different DBA techniques.

  7. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  8. An iteration algorithm for optimal network flows

    NASA Astrophysics Data System (ADS)

    Woong, C. J.

    1983-09-01

    A packet switching network has the desirable feature of rapidly handling short (bursty) messages of the type often found in computer communication systems. In evaluating packet switching networks, the average time delay per packet is one of the most important measures of performance. The problem of message routing to minimize time delay is analyzed here using two approaches, called "successive saturation' and "max-slack', for various traffic requirement matrices and networks with fixed topology and link capacities.

  9. A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication

    PubMed Central

    Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo

    2017-01-01

    Image interleaving has proven to be an effective solution for improving the robustness of image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function which allows assessing the suitability of a packetization pattern while avoiding extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm as an alternative to the cited work, adopting the same cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we address the validation of the cost function and provide results on its implications for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the PSNR image quality metric show that our algorithm yields results similar to those of the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934

  10. Internet traffic load balancing using dynamic hashing with flow volume

    NASA Astrophysics Data System (ADS)

    Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.

    2002-07-01

    Sending IP packets over multiple parallel links is in extensive use in today's Internet, and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep packets in order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of load transferred between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing it. We then introduce two DHFV algorithms with different flow selection strategies and show their performance through simulation.
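    As a rough illustration of the idea (not the paper's DHFV algorithm), the sketch below hashes each flow to a link, tracks per-link load and per-flow volume, and, when the load imbalance exceeds a threshold, reassigns only the single largest flow on the hottest link. The threshold, data structures and field names are assumptions made for the sake of the example.

    ```python
    # Illustrative sketch of hash-based link selection that reassigns only the
    # largest flow from an overloaded link; thresholds and structures are assumptions.
    import hashlib
    import random
    from collections import defaultdict

    NUM_LINKS = 4
    REASSIGN_THRESHOLD = 0.25        # load-imbalance fraction that triggers rebalancing

    flow_override = {}               # flows explicitly moved to another link
    link_load = defaultdict(int)     # bytes recently sent per link
    flow_volume = defaultdict(int)   # bytes recently sent per flow

    def flow_id(pkt):
        return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"])

    def pick_link(pkt):
        fid = flow_id(pkt)
        if fid in flow_override:
            return flow_override[fid]
        digest = hashlib.sha1(repr(fid).encode()).digest()
        return digest[0] % NUM_LINKS     # static hashing keeps a flow on one link

    def account(pkt, link):
        link_load[link] += pkt["size"]
        flow_volume[flow_id(pkt)] += pkt["size"]

    def rebalance():
        """Move the single largest flow off the most loaded link, if imbalanced."""
        total = sum(link_load.values()) or 1
        hot = max(range(NUM_LINKS), key=lambda l: link_load[l])
        cold = min(range(NUM_LINKS), key=lambda l: link_load[l])
        if (link_load[hot] - link_load[cold]) / total < REASSIGN_THRESHOLD:
            return
        on_hot = [f for f in flow_volume
                  if pick_link(dict(zip(("src", "dst", "sport", "dport"), f))) == hot]
        if on_hot:
            biggest = max(on_hot, key=lambda f: flow_volume[f])
            flow_override[biggest] = cold    # only the large flow is reassigned

    # Tiny demo: one heavy flow and four light ones.
    random.seed(0)
    flows = [("10.0.0.1", "10.0.0.9", 5000 + i, 80) for i in range(5)]
    for _ in range(1000):
        f = flows[0] if random.random() < 0.6 else random.choice(flows[1:])
        pkt = dict(zip(("src", "dst", "sport", "dport"), f), size=1500)
        account(pkt, pick_link(pkt))
    rebalance()
    print(dict(link_load), flow_override)
    ```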

  11. Improving the resolution for Lamb wave testing via a smoothed Capon algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Xuwei; Zeng, Liang; Lin, Jing; Hua, Jiadong

    2018-04-01

    Lamb wave testing is promising for damage detection and evaluation in large-area structures. The dispersion of Lamb waves is often unavoidable, restricting testing resolution and making the signal hard to interpret. A smoothed Capon algorithm is proposed in this paper to estimate the accurate path length of each wave packet. In the algorithm, frequency-domain whitening is first used to obtain the transfer function in the bandwidth of the excitation pulse. Subsequently, wavenumber-domain smoothing is employed to reduce the correlation between wave packets. Finally, the path lengths are determined by distance-domain searching based on the Capon algorithm. Simulations are used to optimize the number of smoothing iterations. Experiments are performed on an aluminum plate containing two simulated defects. The results demonstrate that spatial resolution is improved significantly by the proposed algorithm.

  12. Prediction-Based Energy Saving Mechanism in 3GPP NB-IoT Networks.

    PubMed

    Lee, Jinseong; Lee, Jaiyong

    2017-09-01

    The current expansion of the Internet of Things (IoT) demands improved communication platforms that support a wide area with low energy consumption. The 3rd Generation Partnership Project introduced narrowband IoT (NB-IoT) as an IoT communication solution. NB-IoT devices should be available for over 10 years without requiring a battery replacement; thus, low energy consumption is essential for the successful deployment of this technology. Given that a large amount of energy is consumed for radio transmission by the power amplifier, reducing the uplink transmission time is key to ensuring a long lifespan of an IoT device. In this paper, we propose a prediction-based energy saving mechanism (PBESM) that is focused on enhanced uplink transmission. The mechanism consists of two parts: first, a network architecture that predicts uplink packet occurrence through deep packet inspection; second, an algorithm that predicts the processing delay and pre-assigns radio resources to enhance the scheduling request procedure. In this way, our mechanism reduces the number of random accesses and the energy consumed by radio transmission. Simulation results showed that the energy consumption using the proposed PBESM is reduced by up to 34% in comparison with the conventional NB-IoT method.

  13. Prediction-Based Energy Saving Mechanism in 3GPP NB-IoT Networks

    PubMed Central

    2017-01-01

    The current expansion of the Internet of Things (IoT) demands improved communication platforms that support a wide area with low energy consumption. The 3rd Generation Partnership Project introduced narrowband IoT (NB-IoT) as an IoT communication solution. NB-IoT devices should be available for over 10 years without requiring a battery replacement; thus, low energy consumption is essential for the successful deployment of this technology. Given that a large amount of energy is consumed for radio transmission by the power amplifier, reducing the uplink transmission time is key to ensuring a long lifespan of an IoT device. In this paper, we propose a prediction-based energy saving mechanism (PBESM) that is focused on enhanced uplink transmission. The mechanism consists of two parts: first, a network architecture that predicts uplink packet occurrence through deep packet inspection; second, an algorithm that predicts the processing delay and pre-assigns radio resources to enhance the scheduling request procedure. In this way, our mechanism reduces the number of random accesses and the energy consumed by radio transmission. Simulation results showed that the energy consumption using the proposed PBESM is reduced by up to 34% in comparison with the conventional NB-IoT method. PMID:28862675

  14. Objective research of auscultation signals in Traditional Chinese Medicine based on wavelet packet energy and support vector machine.

    PubMed

    Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei

    2010-01-01

    This study aims at utilising the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm to perform objective analysis and quantitative research for auscultation in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into finer frequency bands. Then, statistical analysis was performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition with SVM was used to distinguish the statistical feature values of the mixed-subject sample groups. Finally, the experimental results showed that the classification accuracies were at a high level.
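    A minimal sketch of this pipeline is shown below, assuming the pywt and scikit-learn packages. The wavelet ('db4'), the synthetic toy signals and the normalisation step are our own illustrative choices; only the level-6 wavelet packet energy features and the SVM classifier follow the abstract.

    ```python
    # Sketch of wavelet-packet-energy features followed by SVM classification,
    # assuming the pywt and scikit-learn packages. The wavelet ('db4'), the toy
    # signals and the normalisation are our own choices; only the level-6 WPE
    # features and the SVM classifier follow the abstract.
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wpe_features(signal, wavelet="db4", level=6):
        """Energy of each terminal wavelet-packet node at the given level."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        return energies / (energies.sum() + 1e-12)   # normalised energy distribution

    def toy_signal(cls, rng, n=2048):
        """Stand-in for an auscultation recording: tone frequency depends on class."""
        tone = np.sin(2 * np.pi * (5 if cls == 0 else 40) * np.arange(n) / n)
        return tone + rng.normal(scale=0.5, size=n)

    rng = np.random.default_rng(0)
    X = np.array([wpe_features(toy_signal(cls, rng)) for cls in (0, 1) for _ in range(20)])
    y = np.repeat([0, 1], 20)

    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```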

  15. High-Throughput, Survivable Protocols for CDMA Packet-Radio Networks (SRNTN-74)

    DTIC Science & Technology

    1990-03-01

    published in a paper by Jubin and Tornow [2], however the reader must be aware that the current SURAP algorithms deviate slightly from this reference... threading policy outlined by Jubin and Tornow is equivalent to a link-level window size of one, meaning a policy of not transmitting a new packet to a... Tornow. The DARPA packet-radio network protocols. Proceedings of the IEEE, 75(1):21-32, January 1987. [3] N. Gower and J. Jubin. Congestion control using...

  16. Adaptive control of the packet transmission period with solar energy harvesting prediction in wireless sensor networks.

    PubMed

    Kwon, Kideok; Yang, Jihoon; Yoo, Younghwan

    2015-04-24

    A number of research works have studied packet scheduling policies in energy-scavenging wireless sensor networks based on the predicted amount of harvested energy. Most of them aim to achieve energy neutrality, which means that an embedded system can operate perpetually while meeting application requirements. Unlike other renewable energy sources, solar energy exhibits distinct daily periodicity in the amount of harvested energy. Using this feature, this paper proposes a packet transmission control policy that can enhance network performance while keeping sensor nodes alive. Furthermore, this paper suggests a novel solar energy prediction method that exploits the relation between cloudiness and solar radiation. The experimental results and analyses show that the proposed packet transmission policy outperforms others in terms of deadline miss rate and data throughput. Furthermore, the proposed solar energy prediction method predicts 6.92% more accurately than existing methods.

  17. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    NASA Astrophysics Data System (ADS)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).

  18. Radio Resource Allocation on Complex 4G Wireless Cellular Networks

    NASA Astrophysics Data System (ADS)

    Psannis, Kostas E.

    2015-09-01

    In this article we consider a heuristic algorithm which improves, step by step, wireless data delivery over LTE cellular networks by using the total transmit power with a constraint on users' data rates, and the total throughput with constraints on the total transmit power as well as on users' data rates; these objectives are jointly integrated into a hybrid-layer design framework that performs radio resource allocation for multiple users and effectively decides optimal system parameters, such as the modulation and coding scheme (MCS), in order to adapt to the varying channel quality. We propose a new heuristic algorithm which balances the accessible data rate, the initial data rate of each user allocated by the LTE scheduler, the priority indicator which signals the delay, throughput and packet-loss awareness of the user, and the buffer fullness, thereby maximizing radio resource allocation across multiple users. It is noted that the overall performance improves as the number of users increases, due to multiuser diversity. Experimental results illustrate and validate the accuracy of the proposed methodology.

  19. Fast WEP-Key Recovery Attack Using Only Encrypted IP Packets

    NASA Astrophysics Data System (ADS)

    Teramura, Ryoichi; Asakura, Yasuo; Ohigashi, Toshihiro; Kuwakado, Hidenori; Morii, Masakatu

    Conventional efficient key recovery attacks against Wired Equivalent Privacy (WEP) require specific initialization vectors or specific packets. Since it takes much time to collect enough such packets, an active attack is usually performed to accelerate the collection; an Intrusion Detection System (IDS), however, will be able to detect the active attack, and since the attack logs are stored at the servers, it is possible to prevent it. This paper proposes an algorithm for recovering a 104-bit WEP key from arbitrary IP packets in a realistic environment. The attack needs about 36,500 packets for a success probability of 0.5, and its complexity is equivalent to about 2^20 computations of the RC4 key setup. Since our attack is passive, it is difficult for both WEP users and administrators to detect it.

  20. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated through classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.

  1. Design of a Multicast Optical Packet Switch Based on Fiber Bragg Grating Technology for Future Networks

    NASA Astrophysics Data System (ADS)

    Cheng, Yuh-Jiuh; Yeh, Tzuoh-Chyau; Cheng, Shyr-Yuan

    2011-09-01

    In this paper, a non-blocking multicast optical packet switch based on fiber Bragg grating technology with optical output buffers is proposed. Only the header of each optical packet is converted to electronic signals, to control the fiber Bragg grating array of the input ports, while the packet payloads are transparently forwarded to their output ports, so the proposed switch reduces both the electronic interfaces and the electronic bit rate. The modulation and format of packet payloads may be non-standard, and payloads could also occupy different wavelengths to increase the traffic volume. The advantage is obvious: the proposed switch can transport various types of traffic. An easily implemented architecture which can provide multicast services is also presented. An optical output buffer is designed to queue packets when more than one incoming packet is destined for the same output port, or when packets already waiting in the optical output buffer are to be sent to that port in a given time slot. To preserve packet sequencing and fairness of the routing sequence, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The fiber Bragg grating arrays at both the input and output ports are designed to route incoming packets using optical code division multiple access technology.

  2. MOLA II Laser Transmitter Calibration and Performance. 1.2

    NASA Technical Reports Server (NTRS)

    Afzal, Robert S.; Smith, David E. (Technical Monitor)

    1997-01-01

    The goal of the document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. This conversion is dependent on three variables, start detector counts, array heat sink temperature and start detector temperature. All these values are contained within the return packets. The conversion is applied to the GSFC Thermal Vacuum data as well as the in-space data to date and shows good correlation.

  3. SURGNET: An Integrated Surgical Data Transmission System for Telesurgery.

    PubMed

    Natarajan, Sriram; Ganz, Aura

    2009-01-01

    Remote surgery information requires quick and reliable transmission between the surgeon and the patient site. However, the networks that interconnect the surgeon and patient sites are usually time-varying and lossy, which can cause packet loss and delay jitter. In this paper we propose SURGNET, a telesurgery system for which we developed the architecture and algorithms and implemented them on a testbed. The algorithms include adaptive packet prediction and buffer time adjustment techniques which reduce the negative effects caused by lossy, time-varying networks. To evaluate the proposed SURGNET system, at the therapist site we implemented a therapist panel which controls the force feedback device movements and provides image analysis functionality; at the patient site we controlled a virtual reality applet built in Matlab. The varying network conditions were emulated using the NISTNet emulator. Our results show that even under severe packet loss and variable delay jitter, the proposed integrated synchronization techniques significantly improve SURGNET performance.

  4. Comparison Between Three Different Types of Routing Algorithms of Network on Chip

    NASA Astrophysics Data System (ADS)

    Soni, Neetu; Deshmukh, Khemraj

    Network on Chip (NoC) is an on-chip communication technology in which a large number of processing elements and storage blocks are integrated on a single chip. Due to their scalability, adaptive nature and good resource utilization, NoCs have become popular and have efficiently replaced traditional SoC interconnects. NoC performance depends mainly on the type of routing algorithm chosen. In this paper, three types of routing algorithms are compared: a deterministic algorithm (XY routing), three partially adaptive algorithms (West-first, North-last and Negative-first), and two adaptive algorithms (DyAD and OE). They are compared with respect to Packet Injection Rate (PIR) under a random traffic pattern on a 4 × 4 mesh topology. All comparisons and simulations are carried out in NOXIM 2.3.1, a cycle-accurate SystemC-based simulator. The packet distribution is Poisson, the buffer depth of the input channel FIFO is 8, and the packet size is 8 bytes. The simulation time is 50,000 cycles. We found that XY routing is better when the PIR is low, the partially adaptive algorithms are good when the PIR is moderate, and DyAD routing is best suited when the load, i.e. the PIR, is high.
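    For reference, the deterministic XY rule used in this kind of comparison can be stated in a few lines: route along the X dimension until the column matches, then along Y. The sketch below is a generic illustration of that rule; the node coordinates and port names are our own and do not reflect NOXIM's internal implementation.

    ```python
    # Minimal sketch of deterministic XY routing on a 2D mesh NoC: a packet is
    # first routed along the X dimension, then along Y. Coordinates and port
    # names are illustrative and do not reflect NOXIM's internal implementation.

    def xy_route(cur, dst):
        """Return the output port for a packet at node `cur` heading to `dst`."""
        cx, cy = cur
        dx, dy = dst
        if cx < dx:
            return "EAST"
        if cx > dx:
            return "WEST"
        if cy < dy:
            return "NORTH"
        if cy > dy:
            return "SOUTH"
        return "LOCAL"          # arrived: deliver to the attached core

    # Trace a packet from (0, 0) to (3, 2) on a 4 x 4 mesh:
    # it moves east three hops, then north two hops.
    STEP = {"EAST": (1, 0), "WEST": (-1, 0), "NORTH": (0, 1), "SOUTH": (0, -1)}
    hop, path = (0, 0), [(0, 0)]
    while hop != (3, 2):
        dx, dy = STEP[xy_route(hop, (3, 2))]
        hop = (hop[0] + dx, hop[1] + dy)
        path.append(hop)
    print(path)
    ```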

  5. System-level power optimization for real-time distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient-driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.

  6. A game theory-based obstacle avoidance routing protocol for wireless sensor networks.

    PubMed

    Guan, Xin; Wu, Huayang; Bi, Shujun

    2011-01-01

    The obstacle avoidance problem in geographic forwarding is an important issue for location-based routing in wireless sensor networks. The presence of an obstacle leads to several geographic routing problems such as excessive energy consumption and data congestion. Obstacles are hard to avoid in realistic environments. To bypass obstacles, most routing protocols tend to forward packets along the obstacle boundaries. This leads to a situation where the nodes at the boundaries exhaust their energy rapidly and the obstacle area is diffused. In this paper, we introduce a novel routing algorithm to solve the obstacle problem in wireless sensor networks based on a game-theory model. Our algorithm handles the concave regions that cannot forward packets, with the aim of improving the transmission success rate and decreasing packet transmission delays. We consider the residual energy, out-degree and forwarding angle to determine the forwarding probability and payoff function of forwarding candidates; this achieves load balancing and reduces network energy consumption. Simulation results show that, in terms of average delivery delay, energy consumption and packet delivery ratio, our protocol is superior to other traditional schemes.

  7. Hybrid Packet-Pheromone-Based Probabilistic Routing for Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Kashkouli Nejad, Keyvan; Shawish, Ahmed; Jiang, Xiaohong; Horiguchi, Susumu

    Ad-Hoc networks are collections of mobile nodes communicating over wireless media without any fixed infrastructure. Minimal configuration and quick deployment make Ad-Hoc networks suitable for emergency situations like natural disasters or military conflicts. Current Ad-Hoc networks can support either high mobility or a high transmission rate, but not both at once, because they employ static approaches in their routing schemes. However, due to the continuous expansion of Ad-Hoc networks in size, node mobility and transmission rate, the development of new adaptive and dynamic routing schemes has become crucial. In this paper we propose a new routing scheme to support high transmission rates and high node mobility simultaneously in a large Ad-Hoc network, by combining a newly proposed packet-pheromone-based approach with the Hint Based Probabilistic Protocol (HBPP) for congestion avoidance with dynamic path selection in the packet forwarding process. Because it uses the available feedback information, the proposed algorithm does not introduce any additional overhead. The extensive simulation-based analysis conducted in this paper indicates that the proposed algorithm offers small packet latency and achieves a significantly higher delivery probability than the available Hint-Based Probabilistic Protocol (HBPP).

  8. An Underlay Communication Channel for 5G Cognitive Mesh Networks: Packet Design, Implementation, Analysis, and Experimental Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarek Haddadin; Stephen Andrew Laraway; Arslan Majid

    This paper proposes and presents the design and implementation of an underlay communication channel (UCC) for 5G cognitive mesh networks. The UCC builds its waveform based on filter bank multicarrier spread spectrum (FB-MCSS) signaling. The use of this novel spread spectrum signaling allows the device-to-device (D2D) user equipments (UEs) to communicate at a level well below noise temperature and hence, minimize taxation on macro-cell/small-cell base stations and their UEs in 5G wireless systems. Moreover, the use of filter banks allows us to avoid those portions of the spectrum that are in use by macro-cell and small-cell users. Hence, both D2D-to-cellular and cellular-to-D2D interference will be very close to none. We propose a specific packet for UCC and develop algorithms for packet detection, timing acquisition and tracking, as well as channel estimation and equalization. We also present the detail of an implementation of the proposed transceiver on a software radio platform and compare our experimental results with those from a theoretical analysis of our packet detection algorithm.

  9. A Glider-Assisted Link Disruption Restoration Mechanism in Underwater Acoustic Sensor Networks.

    PubMed

    Jin, Zhigang; Wang, Ning; Su, Yishan; Yang, Qiuling

    2018-02-07

    Underwater acoustic sensor networks (UASNs) have become a hot research topic. In UASNs, nodes can be affected by ocean currents and external forces, which could result in sudden link disruption. Therefore, designing a flexible and efficient link disruption restoration mechanism to ensure network connectivity is a challenge. In this paper, we propose a glider-assisted restoration mechanism which includes link disruption recognition and a related link restoring mechanism. In the link disruption recognition mechanism, the cluster heads collect the link disruption information and then schedule gliders acting as relay nodes to restore the disrupted link. Considering the glider's sawtooth motion, we design a relay location optimization algorithm that accounts for both the glider's trajectory and the acoustic channel attenuation model. The utility function is established by minimizing the channel attenuation, and the optimal location of the glider is solved by a multiplier method. The glider-assisted restoration mechanism can greatly improve the packet delivery rate, reduce the communication energy consumption, and generalize to the restoration of different link disruption scenarios. The simulation results show that the glider-assisted restoration mechanism can improve the delivery rate of data packets by 15-33% compared with cooperative opportunistic routing (OVAR), hop-by-hop vector-based forwarding (HH-VBF) and vector-based forwarding (VBF), and reduce communication energy consumption by 20-58% for a typical network setting.

  10. A Glider-Assisted Link Disruption Restoration Mechanism in Underwater Acoustic Sensor Networks

    PubMed Central

    Wang, Ning; Su, Yishan; Yang, Qiuling

    2018-01-01

    Underwater acoustic sensor networks (UASNs) have become a hot research topic. In UASNs, nodes can be affected by ocean currents and external forces, which could result in sudden link disruption. Therefore, designing a flexible and efficient link disruption restoration mechanism to ensure network connectivity is a challenge. In this paper, we propose a glider-assisted restoration mechanism which includes link disruption recognition and a related link restoring mechanism. In the link disruption recognition mechanism, the cluster heads collect the link disruption information and then schedule gliders acting as relay nodes to restore the disrupted link. Considering the glider's sawtooth motion, we design a relay location optimization algorithm that accounts for both the glider's trajectory and the acoustic channel attenuation model. The utility function is established by minimizing the channel attenuation, and the optimal location of the glider is solved by a multiplier method. The glider-assisted restoration mechanism can greatly improve the packet delivery rate, reduce the communication energy consumption, and generalize to the restoration of different link disruption scenarios. The simulation results show that the glider-assisted restoration mechanism can improve the delivery rate of data packets by 15–33% compared with cooperative opportunistic routing (OVAR), hop-by-hop vector-based forwarding (HH-VBF) and vector-based forwarding (VBF), and reduce communication energy consumption by 20–58% for a typical network setting. PMID:29414898

  11. On Maximizing the Throughput of Packet Transmission under Energy Constraints.

    PubMed

    Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng

    2018-06-23

    More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services over recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy consumed and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capabilities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies that maximize the amount of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot apply to scenarios where data packets have individual deadlines and only integrally transmitted packets count. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. More challenging, but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an optimal algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible since the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimality properties derived for the offline solution. Simulation results validate the efficiency of the proposed algorithm.
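    To see why a pseudo-polynomial offline algorithm is plausible here, consider a heavily simplified model (our own, not the paper's): each packet has a fixed value and an integer energy cost, deadlines and fading are ignored, and the task reduces to selecting a subset of packets that maximizes total value within the energy budget. A standard 0/1 dynamic program over the discretized budget then runs in time proportional to the number of packets times the number of energy units.

    ```python
    # Simplified illustration (not the paper's algorithm): each packet has a value
    # and an integer energy cost; choose a subset maximising total value within the
    # energy budget. Deadlines and channel fading are ignored here.

    def max_weighted_throughput(packets, energy_budget):
        """packets: list of (value, energy_cost) pairs with integer energy costs."""
        best = [0] * (energy_budget + 1)
        for value, cost in packets:
            for e in range(energy_budget, cost - 1, -1):   # classic 0/1 DP sweep
                best[e] = max(best[e], best[e - cost] + value)
        return best[energy_budget]

    packets = [(10, 4), (7, 3), (5, 2), (9, 5)]   # (value, energy units)
    print(max_weighted_throughput(packets, energy_budget=9))   # -> 22
    ```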

  12. The congestion control algorithm based on queue management of each node in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Wei, Yifei; Chang, Lin; Wang, Yali; Wang, Gaoping

    2016-12-01

    This paper proposes an active queue management mechanism that considers a node's own capability and its importance in the network when setting its queue thresholds. As the network load increases, local congestion in a mobile ad hoc network may degrade network performance and increase the energy consumption of hot nodes, possibly to the point of failure; if a low-energy node becomes congested by forwarding data packets, it will cause heavy packet loss when it is later used as a source node. Nodes control the probability of their buffer queues entering the different congestion regions by adjusting the upper and lower thresholds, so that each node can adjust its responsibility for forwarding data packets according to its own situation. The proposed algorithm slows down the sending rate hop by hop, along the packet transmission direction from the congested node back towards the source node, so as to prevent further congestion at the source. The simulation results show that the algorithm better exploits the data-forwarding ability of strong nodes, protects weak nodes, and can effectively alleviate network congestion.
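    A rough sketch of such a node-local queue-threshold mechanism is given below. The way the thresholds are scaled by residual energy and node importance, and the three congestion regions, are illustrative assumptions rather than the paper's exact formulas; the point is only that each node derives its own thresholds and signals upstream neighbours to slow down as its queue grows.

    ```python
    # Illustrative sketch: each node derives its own queue thresholds from its
    # residual energy and importance, and reports its congestion region so the
    # upstream neighbour can slow down. The scaling formula is an assumption.

    class NodeQueue:
        def __init__(self, capacity, residual_energy, importance):
            # Weaker or more important nodes keep lower thresholds so they start
            # shedding forwarding load earlier (illustrative scaling).
            scale = 0.5 + 0.5 * residual_energy * (1.0 - importance)
            self.capacity = capacity
            self.low = int(0.4 * capacity * scale)
            self.high = int(0.8 * capacity * scale)
            self.queue = []

        def congestion_level(self):
            n = len(self.queue)
            if n < self.low:
                return "free"
            if n < self.high:
                return "warning"      # upstream should reduce its send rate
            return "congested"        # upstream should back off sharply

        def enqueue(self, pkt):
            if len(self.queue) >= self.capacity:
                return False          # tail drop
            self.queue.append(pkt)
            return True

    node = NodeQueue(capacity=50, residual_energy=0.3, importance=0.8)
    for i in range(30):
        node.enqueue(i)
    print(node.low, node.high, node.congestion_level())   # 10 21 congested
    ```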

  13. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather a patient's vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next-hop selection algorithm that satisfies the Quality of Service (QoS) requirements of different healthcare applications. In this paper, a novel efficient next-hop selection algorithm for multi-hop BANs is proposed. The algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next-hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance the energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
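    The joint use of hop count and a link cost can be sketched in a few lines. In the example below, candidates with the minimum hop count are filtered first and the neighbour with the lowest cost wins; the weights and the linear form of the cost are illustrative assumptions, not the calibration used in the paper.

    ```python
    # Sketch of next-hop selection for a multi-hop BAN: among neighbours with the
    # minimum hop count to the sink, pick the one with the lowest link cost.
    # The weights and linear cost form are illustrative assumptions.

    def link_cost(nb, w_energy=0.4, w_buffer=0.3, w_reliability=0.3):
        """Lower is better; all neighbour attributes are normalised to [0, 1]."""
        return (w_energy * (1.0 - nb["residual_energy"])
                + w_buffer * (1.0 - nb["free_buffer"])
                + w_reliability * (1.0 - nb["link_reliability"]))

    def select_next_hop(neighbours):
        min_hops = min(nb["hop_count"] for nb in neighbours)
        candidates = [nb for nb in neighbours if nb["hop_count"] == min_hops]
        return min(candidates, key=link_cost)

    neighbours = [
        {"id": "A", "hop_count": 2, "residual_energy": 0.9, "free_buffer": 0.5, "link_reliability": 0.7},
        {"id": "B", "hop_count": 2, "residual_energy": 0.6, "free_buffer": 0.9, "link_reliability": 0.9},
        {"id": "C", "hop_count": 3, "residual_energy": 1.0, "free_buffer": 1.0, "link_reliability": 1.0},
    ]
    print(select_next_hop(neighbours)["id"])   # "B": same hop count as A, lower cost
    ```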

  14. Signals of Opportunity Navigation Using Wi-Fi Signals

    DTIC Science & Technology

    2011-03-24

    (Acronym list fragment: ... Identifier; MVM, Mean Value Method; SDM, Scaled Differential Method.) ...the mean value (MVM) and scaled differential (SDM) methods. An error was logged if the UI correlation algorithm identified a packet index that did... Notable from this graph is that a window of 50 packets appears to provide zero errors for MVM and near-zero errors for SDM. Also notable is that a...

  15. Automatic detection of blood versus non-blood regions on intravascular ultrasound (IVUS) images using wavelet packet signatures

    NASA Astrophysics Data System (ADS)

    Katouzian, Amin; Baseri, Babak; Konofagou, Elisa E.; Laine, Andrew F.

    2008-03-01

    Intravascular ultrasound (IVUS) has been proven a reliable imaging modality that is widely employed in cardiac interventional procedures. It can provide morphologic as well as pathologic information on the occluded plaques in the coronary arteries. In this paper, we present a new technique using wavelet packet analysis that differentiates between blood and non-blood regions in IVUS images. We utilized a multi-channel texture segmentation algorithm based on discrete wavelet packet frames (DWPF). A k-means clustering algorithm was deployed to partition the extracted textural features into blood and non-blood in an unsupervised fashion. Finally, the geometric and statistical information of the segmented regions was used to estimate the set of pixels closest to the lumen border, and a spline curve was fitted to this set. The presented algorithm may be helpful in delineating the lumen border automatically and more reliably prior to plaque characterization, especially with 40 MHz transducers, where the appearance of red blood cells renders border detection more challenging, even manually. Experimental results are shown and are quantitatively compared with borders manually traced by an expert. It is concluded that our two-dimensional (2-D) algorithm, which is independent of cardiac and catheter motions, performs well in both in-vivo and in-vitro cases.

  16. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling.

    PubMed

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-09-18

    This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs adaptive cross-layer intra-cluster scheduling and inter-cluster relay selection diversity. The scheduling is based on the available data packets and the remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit, as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) of the relay cluster head (RCH). An analytical framework for the energy consumption and transmission delay of the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption and network lifetime. The results obtained indicate that the proposed MAC protocol provides better performance than traditional cluster-based MAC protocols.

  17. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling

    PubMed Central

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-01-01

    This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs adaptive cross-layer intra-cluster scheduling and inter-cluster relay selection diversity. The scheduling is based on the available data packets and the remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit, as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) of the relay cluster head (RCH). An analytical framework for the energy consumption and transmission delay of the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption and network lifetime. The results obtained indicate that the proposed MAC protocol provides better performance than traditional cluster-based MAC protocols. PMID:26393608

  18. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for the implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable-bit-rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between the codec and the network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  19. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. The new method is based on the wavelet packet transform and attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increase spectral resolution while performing daily global collections of the Earth. Applying the wavelet packet transform to the spectral dimension of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high-performance computing. This research seeks two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.

  20. A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications

    NASA Astrophysics Data System (ADS)

    Entezari-Maleki, Reza; Movaghar, Ali

    Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. In order to maximize the throughput of grid environments, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment within the grid resources. The experimental results obtained by applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its ability to achieve schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA and Sufferage.
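    A compact, generic sketch of this kind of genetic assignment is shown below: a chromosome is one resource index per task, fitness is the makespan of the resulting assignment, and elitism plus one-point crossover and mutation drive the search. The task lengths, resource speeds, operators and parameters are all illustrative assumptions, not the paper's experimental setup.

    ```python
    # Compact GA sketch for assigning independent tasks to grid resources so the
    # makespan is minimized. Chromosome = tuple of resource indices, one per task.
    # Operators and parameters are illustrative, not the paper's exact design.
    import random

    random.seed(1)
    task_len = [14, 7, 22, 5, 9, 17, 11, 3]        # task lengths (arbitrary units)
    speed = [1.0, 2.0, 1.5]                        # resource speeds

    def makespan(assign):
        load = [0.0] * len(speed)
        for t, r in enumerate(assign):
            load[r] += task_len[t] / speed[r]
        return max(load)

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(a, rate=0.1):
        return tuple(random.randrange(len(speed)) if random.random() < rate else g
                     for g in a)

    pop = [tuple(random.randrange(len(speed)) for _ in task_len) for _ in range(40)]
    for _ in range(200):
        pop.sort(key=makespan)
        elite = pop[:10]                           # keep the best assignments
        children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
        pop = elite + children
    print(min(map(makespan, pop)))                 # near-optimal makespan
    ```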

  1. Single-Pass Serial Scheduling Heuristic for Eglin AFB Range Services Division Schedule

    DTIC Science & Technology

    2009-06-01

    scheduling tool for this RCPSP. Research on a schedule improvement metaheuristic and coding of the complete algorithm is required before it can be... a schedule better by applying metaheuristic improvement algorithms to a feasible schedule after it is created. 2.5.1. Greedy Algorithm: The... next available position, the algorithm will not utilize all the available range time and manpower. An improvement metaheuristic is required to...

  2. A packet data compressor

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Choi, Junho

    1995-01-01

    We are in the preliminary stages of creating an operational system for losslessly compressing packet data streams. The end goal is to reduce costs. Real world constraints include transmission in the presence of error, tradeoffs between the costs of compression and the costs of transmission and storage, and imperfect knowledge of the data streams to be transmitted. The overall method is to bring together packets of similar type, split the data into bit fields, and test a large number of compression algorithms. Preliminary results are very encouraging, typically offering compression factors substantially higher than those obtained with simpler generic byte stream compressors, such as Unix Compress and HA 0.98.
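    In the spirit of that approach, the toy sketch below groups the bytes of two hypothetical packet fields and tries several standard-library lossless codecs on each, keeping whichever compresses best. The fields, codec set and sizes are illustrative assumptions; the operational system described above tests a much larger suite of algorithms over real packet streams.

    ```python
    # Toy sketch: for each field of a packet group, try several lossless codecs
    # and keep whichever compresses best. The standard-library codecs stand in
    # for the much larger set of algorithms tested in the actual system.
    import bz2, lzma, zlib

    CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

    def best_codec(field_bytes):
        results = {name: len(fn(field_bytes)) for name, fn in CODECS.items()}
        name = min(results, key=results.get)
        return name, results[name], len(field_bytes)

    # Pretend these are the concatenated values of two fields across many packets
    # of the same type: a slowly increasing timestamp field and a payload field.
    timestamps = b"".join((1000 + i).to_bytes(4, "big") for i in range(500))
    payload = bytes(range(256)) * 8

    for label, field in (("timestamp", timestamps), ("payload", payload)):
        name, size, raw = best_codec(field)
        print(f"{label}: {raw} -> {size} bytes using {name}")
    ```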

  3. Rejection of the maternal electrocardiogram in the electrohysterogram signal.

    PubMed

    Leman, H; Marque, C

    2000-08-01

    The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the number of histogram modes. Using a new criterion, we compute a best basis adapted to the denoising. After thresholding the EHG wavelet packet coefficients in the selected basis, the inverse transform is applied. The ECG appears to be removed very efficiently.
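    The sketch below shows only the generic mechanics of wavelet-packet coefficient thresholding followed by the inverse transform, assuming the pywt package. It uses the ordinary (decimated) wavelet packet transform, a universal threshold and a synthetic noisy signal; the paper's redundant transform, histogram-based mode detection and best-basis criterion are not reproduced.

    ```python
    # Simplified mechanics of wavelet-packet thresholding, assuming the pywt
    # package: decompose, soft-threshold the leaf coefficients, reconstruct.
    # The paper's redundant transform, histogram-based mode detection and
    # best-basis criterion are not reproduced here.
    import numpy as np
    import pywt

    def wp_denoise(signal, wavelet="db4", level=4):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        # Universal threshold estimated from the highest-frequency packet.
        sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(signal)))
        new_wp = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric")
        for node in nodes:
            new_wp[node.path] = pywt.threshold(node.data, thr, mode="soft")
        return new_wp.reconstruct(update=False)[: len(signal)]

    # Toy check on a synthetic noisy signal (not real EHG data).
    t = np.linspace(0, 10, 2048)
    clean = np.sin(2 * np.pi * 0.4 * t)
    noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
    denoised = wp_denoise(noisy)
    print(np.std(noisy - clean), np.std(denoised - clean))   # residual error typically shrinks
    ```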

  4. Estimating Angle-of-Arrival and Time-of-Flight for Multipath Components Using WiFi Channel State Information.

    PubMed

    Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil

    2018-05-29

    Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using 2D grid search, which is computationally expensive and is therefore not suited for real-time localization. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. High accuracy and low computational complexity make our approach suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.

  5. Call for Papers: Photonics in Switching

    NASA Astrophysics Data System (ADS)

    Wosinska, Lena; Glick, Madeleine

    2006-04-01

    Call for Papers: Photonics in Switching

    Guest Editors:

    Lena Wosinska, Royal Institute of Technology (KTH) / ICT, Sweden; Madeleine Glick, Intel Research, Cambridge, UK

    Technologies based on DWDM systems allow data transmission with bit rates of Tbit/s on a single fiber. To facilitate this enormous transmission volume, high-capacity and high-speed network nodes become inevitable in the optical network. Wideband switching, WDM switching, optical burst switching (OBS), and optical packet switching (OPS) are promising technologies for harnessing the bandwidth of WDM optical fiber networks in a highly flexible and efficient manner. As a number of key optical component technologies approach maturity, photonics in switching is becoming an increasingly attractive and practical solution for the next-generation of optical networks. The scope of this special issue is focused on the technology and architecture of optical switching nodes, including the architectural and algorithmic aspects of high-speed optical networks.

    Scope of Submission

    The scope of the papers includes, but is not limited to, the following topics:
    • WDM node architectures
    • Novel device technologies enabling photonics in switching, such as optical switch fabrics, optical memory, and wavelength conversion
    • Routing protocols
    • WDM switching and routing
    • Quality of service
    • Performance measurement and evaluation
    • Next-generation optical networks: architecture, signaling, and control
    • Traffic measurement and field trials
    • Optical burst and packet switching
    • OBS/OPS node architectures
    • Burst/Packet scheduling and routing algorithms
    • Contention resolution/avoidance strategies
    • Services and applications for OBS/OPS (e.g., grid networks, storage-area networks, etc.)
    • Burst assembly and ingress traffic shaping
    • Hybrid OBS/TDM or OBS/wavelength routing

    Manuscript Submission

    To submit to this special issue, follow the normal procedure for submission to JON and select "Photonics in Switching" in the features indicator of the online submission form. For all other questions relating to this feature issue, please send an e-mail to jon@osa.org, subject line "Photonics in Switching." Additional information can be found on the JON website: http://www.osa-jon.org/journal/jon/author.cfm. Submission Deadline: 15 September 2006

  6. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are processed by the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, showing that the features extracted from the wavelet packet transform yield promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in PSCAD software.

  7. Production scheduling with ant colony optimization

    NASA Astrophysics Data System (ADS)

    Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu

    2017-10-01

    The optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows one to obtain the required amount of production within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is a well-known technique for solving global multi-objective optimization problems. In this article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against other production scheduling algorithms is presented. The advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process are discussed.
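    As a toy illustration of how ant colony optimization can be applied to a scheduling problem (not the authors' production model), the sketch below assigns jobs to identical machines so as to minimize the makespan, keeping one pheromone value per (job, machine) pair, evaporating it each iteration and reinforcing the best schedule found so far. All parameters and the problem instance are assumptions made for the example.

    ```python
    # Toy ant-colony-optimization sketch for a scheduling problem: assign jobs to
    # identical-speed machines so the makespan is minimized. Pheromone is kept per
    # (job, machine) pair. Parameters and the problem itself are illustrative only.
    import random

    random.seed(2)
    jobs = [6, 4, 9, 2, 7, 8, 3, 5]     # processing times
    M = 3                               # number of machines
    ALPHA, RHO, Q = 1.0, 0.1, 10.0      # pheromone weight, evaporation, deposit

    pheromone = [[1.0] * M for _ in jobs]

    def build_schedule():
        load = [0.0] * M
        assign = []
        for j, p in enumerate(jobs):
            # Prefer machines with strong pheromone and currently light load.
            weights = [(pheromone[j][m] ** ALPHA) / (1.0 + load[m]) for m in range(M)]
            m = random.choices(range(M), weights=weights)[0]
            assign.append(m)
            load[m] += p
        return assign, max(load)

    best_assign, best_makespan = None, float("inf")
    for _ in range(100):                          # iterations
        for assign, mk in (build_schedule() for _ in range(20)):   # 20 ants
            if mk < best_makespan:
                best_assign, best_makespan = assign, mk
        # Evaporate, then reinforce the best schedule found so far.
        for j in range(len(jobs)):
            for m in range(M):
                pheromone[j][m] *= (1.0 - RHO)
        for j, m in enumerate(best_assign):
            pheromone[j][m] += Q / best_makespan

    print(best_makespan, best_assign)
    ```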

  8. IDMA: improving the defense against malicious attack for mobile ad hoc networks based on ARIP protocol

    NASA Astrophysics Data System (ADS)

    Peng, Chaorong; Chen, Chang Wen

    2008-04-01

    Malicious nodes are mounting increasingly sophisticated attacks on Mobile Ad Hoc Networks (MANETs), mainly because IP-based MANETs are vulnerable to attacks by various malicious nodes. However, the defense against malicious attack can be improved when a new layer of network architecture is developed to keep true IP addresses from being disclosed to malicious nodes. In this paper, we propose a new algorithm to improve the defense against malicious attack (IDMA) that is based on the recently developed Assignment Router Identity Protocol (ARIP) for clustering-based MANET management. In the ARIP protocol, we design the ARIP architecture based on the new identity, instead of the vulnerable IP addresses, to provide the required security, which is embedded seamlessly into the overall network architecture. We make full use of ARIP's special property of monitoring gateway-forwarded packets through Route Reply (RREP) packets, without an additional intrusion detection layer. We name this new algorithm IDMA because of its inherent capability to improve the defense against malicious attacks. Through IDMA, a watching algorithm can be established to counterattack a malicious node in the routing path when it unusually drops packets. We provide analysis examples of IDMA defending against a malicious node that disrupts route discovery by impersonating the destination, by responding with corrupted routing information, or by disseminating forged control traffic. The IDMA algorithm is able to counterattack the malicious node in the cases where the node launches a DoS attack by broadcasting a large number of route requests, causes congestion at the target by delivering a huge amount of data, or spoofs IP addresses and sends forged packets with a fake ID to the same target, causing traffic congestion at that destination. We have implemented the IDMA algorithm using the GloMoSim simulator and have demonstrated its performance under a variety of operational conditions.

  9. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  10. Pre-Scheduled and Self Organized Sleep-Scheduling Algorithms for Efficient K-Coverage in Wireless Sensor Networks

    PubMed Central

    Hwang, I-Shyan

    2017-01-01

    The K-coverage configuration, which guarantees coverage of each location by at least K sensors, is highly popular and is extensively used to monitor diversified applications in wireless sensor networks. Long network lifetime and high detection quality are the essentials of such K-covered sleep-scheduling algorithms. However, the existing sleep-scheduling algorithms either incur high cost or cannot preserve the detection quality effectively. In this paper, the Pre-Scheduling-based K-coverage Group Scheduling (PSKGS) and Self-Organized K-coverage Scheduling (SKS) algorithms are proposed to settle the problems in the existing sleep-scheduling algorithms. Simulation results show that our pre-scheduled KGS approach enhances the detection quality and network lifetime, whereas the self-organized SKS algorithm minimizes the computation and communication cost of the nodes and is thereby energy efficient. Besides, SKS outperforms PSKGS in terms of network lifetime and detection quality as it is self-organized. PMID:29257078

  11. From Signature-Based Towards Behaviour-Based Anomaly Detection (Extended Abstract)

    DTIC Science & Technology

    2010-11-01

    data acquisition can serve as sensors. De- facto standard for IP flow monitoring is NetFlow format. Although NetFlow was originally developed by Cisco...packets with some common properties that pass through a network device. These collected flows are exported to an external device, the NetFlow ...Thanks to the network-based approach using NetFlow data, the detection algorithm is host independent and highly scalable. Deep Packet Inspection

  12. Enhanced round robin CPU scheduling with burst time based time quantum

    NASA Astrophysics Data System (ADS)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

    Process scheduling is a very important function of an operating system. The main well-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm, and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes that are already in the ready queue. The effectiveness of the RR algorithm greatly depends on the chosen time quantum value. In this research paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), a process-scheduling algorithm which calculates the time quantum from the burst times of the processes already in the ready queue, as illustrated in the sketch below. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
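
    The following minimal sketch (not the authors' exact ERRBTQ formula) illustrates the idea: the time quantum is re-derived each scheduling cycle from the burst times of the processes currently in the ready queue (here, simply their mean, which is an assumption), and round-robin then runs with that quantum.

```python
from collections import deque

def round_robin_burst_quantum(burst_times):
    """Round-robin simulation whose time quantum is recomputed each cycle from
    the remaining burst times in the ready queue (mean used as an assumed rule;
    the ERRBTQ paper defines its own quantum formula)."""
    remaining = dict(enumerate(burst_times))      # pid -> remaining burst time
    ready = deque(remaining)                      # ready queue of pids
    clock, completion = 0, {}
    while ready:
        # Recompute the quantum from the processes currently in the ready queue.
        quantum = max(1, round(sum(remaining[p] for p in ready) / len(ready)))
        for _ in range(len(ready)):               # one pass over the current queue
            pid = ready.popleft()
            run = min(quantum, remaining[pid])
            clock += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                completion[pid] = clock           # process finished
            else:
                ready.append(pid)                 # re-queue the unfinished process
    return completion                             # pid -> completion time

if __name__ == "__main__":
    print(round_robin_burst_quantum([24, 3, 3]))
```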

  13. Genetic algorithm to solve the problems of lectures and practicums scheduling

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Apriani, R.; Sawaluddin; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

    Generally, the scheduling process is done manually. However, this method has a low accuracy level, along with possibilities that a scheduled process collides with another scheduled process. When doing theory class and practicum timetable scheduling process, there are numerous problems, such as lecturer teaching schedule collision, schedule collision with another schedule, practicum lesson schedules that collides with theory class, and the number of classrooms available. In this research, genetic algorithm is implemented to perform theory class and practicum timetable scheduling process. The algorithm will be used to process the data containing lists of lecturers, courses, and class rooms, obtained from information technology department at University of Sumatera Utara. The result of scheduling process using genetic algorithm is the most optimal timetable that conforms to available time slots, class rooms, courses, and lecturer schedules.
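
    A compact sketch of the general genetic-algorithm approach described above, not the authors' implementation: candidate timetables are evolved with selection, one-point crossover, and mutation, and fitness counts lecturer and room collisions. The toy session list, constants, and fitness rule are illustrative assumptions.

```python
import random

# Toy instance: each session has a lecturer and a course; a gene assigns it a (timeslot, room).
SESSIONS = [("L1", "AlgoTheory"), ("L1", "AlgoLab"), ("L2", "Networks"), ("L2", "NetLab")]
TIMESLOTS, ROOMS = 3, 2

def random_timetable():
    return [(random.randrange(TIMESLOTS), random.randrange(ROOMS)) for _ in SESSIONS]

def conflicts(timetable):
    """Count hard-constraint violations: a room or a lecturer double-booked in one slot."""
    count = 0
    for i in range(len(SESSIONS)):
        for j in range(i + 1, len(SESSIONS)):
            same_slot = timetable[i][0] == timetable[j][0]
            if same_slot and timetable[i][1] == timetable[j][1]:
                count += 1                                   # room clash
            if same_slot and SESSIONS[i][0] == SESSIONS[j][0]:
                count += 1                                   # lecturer clash
    return count

def genetic_timetable(pop_size=30, generations=200, mutation_rate=0.1):
    population = [random_timetable() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=conflicts)                       # fewer conflicts = fitter
        if conflicts(population[0]) == 0:
            break                                            # collision-free timetable found
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(SESSIONS))
            child = a[:cut] + b[cut:]                        # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(SESSIONS))] = (
                    random.randrange(TIMESLOTS), random.randrange(ROOMS))
            children.append(child)
        population = parents + children
    return min(population, key=conflicts)

print(genetic_timetable())
```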

  14. Fast packet switching algorithms for dynamic resource control over ATM networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsang, R.P.; Keattihananant, P.; Chang, T.

    1996-12-01

    Real-time continuous media traffic, such as digital video and audio, is expected to comprise a large percentage of the network load on future high speed packet switch networks such as ATM. A major feature which distinguishes high speed networks from traditional slower speed networks is the large amount of data the network must process very quickly. For efficient network usage, traffic control mechanisms are essential. Currently, most mechanisms for traffic control (such as flow control) have centered on the support of Available Bit Rate (ABR), i.e., non real-time, traffic. With regard to ATM, for ABR traffic, the two major types of schemes which have been proposed are rate-control and credit-control schemes. Neither of these schemes is directly applicable to Real-time Variable Bit Rate (VBR) traffic such as continuous media traffic. Traffic control for continuous media traffic is an inherently difficult problem due to the time-sensitive nature of the traffic and its unpredictable burstiness. In this study, we present a scheme which controls traffic by dynamically allocating/de-allocating resources among competing VCs based upon their real-time requirements. This scheme incorporates a form of rate-control, real-time burst-level scheduling and link-link flow control. We show analytically potential performance improvements of our rate-control scheme and present a scheme for buffer dimensioning. We also present simulation results of our schemes and discuss the tradeoffs inherent in maintaining high network utilization and statistically guaranteeing many users' Quality of Service.

  15. The serial message-passing schedule for LDPC decoding algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. For this reason, the Layered Decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper, the decoding principle of the LBP algorithm is briefly introduced, and then two improved algorithms are proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.

  16. Identification of speech transients using variable frame rate analysis and wavelet packets.

    PubMed

    Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung

    2006-01-01

    Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, in improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. An inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.

  17. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition for the distributed optical fiber sensing signal have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction, and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder, and rain signals), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector (see the sketch below). In the process of pattern recognition, a diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
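
    A minimal sketch of the wavelet packet feature extraction described above, using the PyWavelets library (the paper does not state its implementation, and the 'db4' wavelet is an assumption): a six-level Daubechies wavelet packet decomposition yields 64 terminal nodes, from which per-node energy and Shannon entropy vectors are computed.

```python
import numpy as np
import pywt

def wavelet_packet_features(signal, wavelet="db4", level=6):
    """Decompose a 1-D sensing signal into 2**level terminal wavelet-packet nodes
    and return the per-node energy and Shannon-entropy feature vectors."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="freq")        # 64 nodes for level=6
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    # Shannon entropy of the normalized energy distribution within each node.
    entropies = []
    for node in nodes:
        p = node.data ** 2
        p = p / (p.sum() + 1e-12)
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    return energies, np.array(entropies)

if __name__ == "__main__":
    sig = np.random.randn(4096)                      # stand-in for a sensing trace
    energy_vec, entropy_vec = wavelet_packet_features(sig)
    print(energy_vec.shape, entropy_vec.shape)       # (64,) (64,)
```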

  18. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight-project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict, as sketched below.
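
    An illustrative sketch of that last step, deriving a conflict-free schedule from a conflict-aware one by dropping overlapping lower-priority requests on the same antenna; the data model, antenna names, and priority convention are assumptions, not the DSN tool's actual interface.

```python
from dataclasses import dataclass

@dataclass
class TrackRequest:
    mission: str
    antenna: str
    start: float      # hours
    end: float
    priority: int     # lower number = higher priority (assumed convention)

def overlaps(a, b):
    return a.antenna == b.antenna and a.start < b.end and b.start < a.end

def conflict_free(conflict_aware):
    """Keep every request of a conflict-aware schedule unless it overlaps a
    higher-priority request on the same antenna."""
    accepted = []
    for req in sorted(conflict_aware, key=lambda r: r.priority):
        if not any(overlaps(req, kept) for kept in accepted):
            accepted.append(req)
    return accepted

requests = [
    TrackRequest("MRO",   "DSS-14", 2.0, 5.0, priority=1),
    TrackRequest("MAVEN", "DSS-14", 4.0, 6.0, priority=2),   # conflicts with MRO
    TrackRequest("JUNO",  "DSS-43", 1.0, 3.0, priority=3),
]
print([r.mission for r in conflict_free(requests)])          # ['MRO', 'JUNO']
```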

  19. Integrated SeismoGeodetic System with High-Resolution, Real-Time GNSS and Accelerometer Observation For Earthquake Early Warning Application.

    NASA Astrophysics Data System (ADS)

    Passmore, P. R.; Jackson, M.; Zimakov, L. G.; Raczka, J.; Davidson, P.

    2014-12-01

    The key requirements for Earthquake Early Warning and other Rapid Event Notification Systems are: quick delivery of digital data from a field station to the acquisition and processing center, and data integrity for real-time earthquake notification in order to provide warning prior to significant ground shaking in the given target area. These two requirements are met in the recently developed Trimble SG160-09 SeismoGeodetic System, which integrates both GNSS and acceleration measurements using the Kalman filter algorithm to create a new high-rate (200 sps), real-time displacement with sufficient accuracy and very low latency for rapid delivery of the acquired data to a processing center. The data acquisition algorithm in the SG160-09 System provides output of both acceleration and displacement digital data with a 0.2 sec delay. This is a significant reduction in the time interval required for real-time transmission compared to the data delivery algorithms available in digitizers currently used in other Earthquake Early Warning networks. Both acceleration and displacement data are recorded and transmitted to the processing site in a specially developed Multiplexed Recording Format (MRF) that minimizes the bandwidth required for real-time data transmission. In addition, a built-in algorithm calculates τc and Pd once an event is declared. The SG160-09 System keeps track of which data have not been acknowledged and re-transmits them, giving priority to current data. A modified REF TEK Protocol Daemon (RTPD) receives the digital data and acknowledges data received without error. It forwards this "good" data to processing clients of various real-time data processing software, including Earthworm and SeisComP3. The processing clients cache packets when a data gap occurs due to a dropped packet or network outage. The cache packet time is settable, but should not exceed 0.5 sec in an Earthquake Early Warning network configuration. The rapid data transmission algorithm was tested with different communication media, including Internet, DSL, Wi-Fi, GPRS, etc. The test results show that the data latency via most communication media does not exceed 0.5 sec nominal from the first sample in the data packet. A detailed description of the acquisition algorithm and the results of data transmission via different communication media are presented.

  20. Intelligent deflection routing in buffer-less networks.

    PubMed

    Haeri, Soroush; Trajković, Ljiljana

    2015-02-01

    Deflection routing is employed to ameliorate packet loss caused by contention in buffer-less architectures such as optical burst-switched networks. The main goal of deflection routing is to successfully deflect a packet based only on the limited knowledge that network nodes possess about their environment. In this paper, we present a framework that introduces intelligence to deflection routing (iDef). iDef decouples the design of the signaling infrastructure from the underlying learning algorithm. It consists of a signaling module and a decision-making module. The signaling module implements a feedback management protocol, while the decision-making module implements a reinforcement learning algorithm; a minimal sketch of a learning-based deflection decision is given below. We also propose several learning-based deflection routing protocols, implement them in iDef using the ns-3 network simulator, and compare their performance.
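
    A minimal sketch of a reinforcement-learning deflection decision in the spirit described above (iDef's actual protocols and state/reward definitions differ and are not specified here): an epsilon-greedy Q-table chooses an alternative output port when the preferred port is busy, and the signaling feedback is turned into a reward.

```python
import random
from collections import defaultdict

class DeflectionAgent:
    """Minimal epsilon-greedy Q-learning sketch for choosing a deflection port.
    State: (destination, busy preferred port); action: alternative output port.
    Reward: +1 if the deflected packet reaches its destination, -1 if dropped,
    as reported back by an assumed signaling/feedback module."""

    def __init__(self, ports, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.ports, self.alpha, self.gamma, self.epsilon = ports, alpha, gamma, epsilon
        self.q = defaultdict(float)                  # (state, port) -> value

    def choose_port(self, state):
        if random.random() < self.epsilon:           # explore
            return random.choice(self.ports)
        return max(self.ports, key=lambda p: self.q[(state, p)])   # exploit

    def update(self, state, port, reward, next_state):
        best_next = max(self.q[(next_state, p)] for p in self.ports)
        target = reward + self.gamma * best_next
        self.q[(state, port)] += self.alpha * (target - self.q[(state, port)])

agent = DeflectionAgent(ports=[1, 2, 3])
s = ("nodeD", 0)                                     # destination, busy preferred port 0
a = agent.choose_port(s)
agent.update(s, a, reward=1.0, next_state=("nodeD", 0))
```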

  1. Resource Management in QoS-Aware Wireless Cellular Networks

    ERIC Educational Resources Information Center

    Zhang, Zhi

    2011-01-01

    Emerging broadband wireless networks that support high speed packet data with heterogeneous quality of service (QoS) requirements demand more flexible and efficient use of the scarce spectral resource. Opportunistic scheduling exploits the time-varying, location-dependent channel conditions to achieve multiuser diversity. In this work, we study…

  2. Asphalt Raking. Instructor Manual. Trainee Manual.

    ERIC Educational Resources Information Center

    Laborers-AGC Education and Training Fund, Pomfret Center, CT.

    This packet consists of the instructor and trainee manuals for an asphalt raking course. The instructor manual contains a course schedule for 4 days of instruction, content outline, and instructor outline. The trainee manual is divided into five sections: safety, asphalt basics, placing methods, repair and patching, and clean-up and maintenance.…

  3. Characterization of Extremely Lightweight Intrusion Detection (ELIDe) Power Utilization with Varying Throughput and Payload Sizes

    DTIC Science & Technology

    2015-09-01

    Extremely Lightweight Intrusion Detection (ELIDe) algorithm on an Android -based mobile device. Our results show that the hashing and inner product...approximately 2.5 megabits per second (assuming a normal distribution of packet sizes) with no significant packet loss. 15. SUBJECT TERMS ELIDe, Android , pcap...system (OS). To run ELIDe, the current version was ported for use on Android .4 2.1 Mobile Device After ELIDe was ported to the Android mobile

  4. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    NASA Astrophysics Data System (ADS)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  5. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    NASA Astrophysics Data System (ADS)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because long processing durations cause performance degradation. This requires testers (the persons who conduct the tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any packet drops.

  6. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    NASA Astrophysics Data System (ADS)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as that of DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance the throughput and fairness among the users, respectively. However, they are not throughput optimal: fairness and throughput decrease when user queue lengths differ due to differing channel quality. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, the scheduling algorithms have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water-filling method (a standard water-filling sketch is given below). We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
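
    For reference, a sketch of standard water-filling power allocation over the effective per-user channel gains left after ZFBF (the paper's modified water-filling method is not reproduced here; this is the textbook baseline, and the gains are illustrative):

```python
import numpy as np

def water_filling(gains, total_power):
    """Standard water-filling: allocate total_power over channels with effective
    gains g_i (SNR per unit power), maximizing sum log2(1 + g_i * p_i)."""
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]                  # strongest channel first
    g = gains[order]
    n = len(g)
    power = np.zeros(n)
    for k in range(n, 0, -1):                        # try using the k best channels
        mu = (total_power + np.sum(1.0 / g[:k])) / k # candidate water level
        p = mu - 1.0 / g[:k]
        if p[-1] >= 0:                               # weakest used channel still positive
            power[:k] = p
            break
    out = np.zeros(n)
    out[order] = power                               # restore original user order
    return out

gains = [3.0, 1.0, 0.2]          # effective per-user channel gains after ZFBF (toy values)
print(water_filling(gains, total_power=2.0))
```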

  7. History-based route selection for reactive ad hoc routing protocols

    NASA Astrophysics Data System (ADS)

    Medidi, Sirisha; Cappetto, Peter

    2007-04-01

    Ad hoc networks rely on cooperation in order to operate, but in a resource-constrained environment not all nodes behave altruistically. Selfish nodes preserve their own resources and do not forward packets that are not in their own self-interest. These nodes degrade the performance of the network, but judicious route selection can help maintain performance despite this behavior. Many route selection algorithms place importance on the shortness of a route rather than its reliability. We introduce a light-weight route selection algorithm that uses past behavior, rather than route length alone, to judge the quality of a route. It draws information from the underlying routing layer at no extra cost and selects routes with a simple algorithm, maintaining its data in a small table that places little demand on memory (a sketch of the idea follows this record). History-based route selection's minimalism suits the needs of portable wireless devices and is easy to implement. We implemented our algorithm and tested it in the ns-2 environment. Our simulation results show that history-based route selection achieves higher packet delivery and improved stability compared with its length-based counterpart.
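
    A minimal sketch of the idea of history-based route selection (the table layout, neutral prior, and tie-breaking rule are assumptions, not the paper's exact design): past delivery outcomes per route are recorded, and the historically most reliable candidate route is preferred.

```python
class RouteHistory:
    """Small per-route success table used to rank candidate routes by past
    delivery behavior rather than by hop count alone (illustrative sketch)."""

    def __init__(self):
        self.table = {}             # route (tuple of nodes) -> [delivered, attempts]

    def record(self, route, delivered):
        stats = self.table.setdefault(tuple(route), [0, 0])
        stats[0] += int(delivered)
        stats[1] += 1

    def reliability(self, route):
        delivered, attempts = self.table.get(tuple(route), [0, 0])
        return delivered / attempts if attempts else 0.5   # unknown routes get a neutral prior

    def select(self, candidate_routes):
        # Highest past reliability wins; a shorter route breaks ties.
        return max(candidate_routes,
                   key=lambda r: (self.reliability(r), -len(r)))

history = RouteHistory()
history.record(["S", "A", "B", "D"], delivered=True)
history.record(["S", "C", "D"], delivered=False)
print(history.select([["S", "A", "B", "D"], ["S", "C", "D"]]))  # longer but more reliable
```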

  8. SOSS User Guide

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan; Gridnev, Sergei; Windhorst, Robert D.

    2015-01-01

    This User Guide describes the SOSS (Surface Operations Simulator and Scheduler) software build and graphical user interface. SOSS is a desktop application that simulates airport surface operations in fast time using traffic management algorithms. It moves aircraft on the airport surface based on information provided by scheduling algorithm prototypes, monitors separation violations and scheduling conformance, and produces scheduling algorithm performance data.

  9. Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing

    NASA Astrophysics Data System (ADS)

    Suma, T.; Murugesan, R.

    2018-04-01

    The current generation of manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an Artificial Immune Algorithm for subtask industrial robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. This paper discusses two optimization objectives: minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we use an algorithm based on the artificial immune system. The parameters are simulated with MATLAB and the results are compared with existing algorithms, showing better performance.

  10. Optimal Scheduling and Fair Service Policy for STDMA in Underwater Networks with Acoustic Communications

    PubMed Central

    2018-01-01

    In this work, a multi-hop string network with a single sink node is analyzed. A periodic optimal scheduling for TDMA operation that considers the characteristic long propagation delay of the underwater acoustic channel is presented. This planning of transmissions is obtained with the help of a new geometrical method based on a 2D lattice in the space-time domain. In order to evaluate the performance of this optimal scheduling, two service policies have been compared: FIFO and Round-Robin. Simulation results, including achievable throughput, packet delay, and queue length, are shown. The network fairness has also been quantified with the Gini index. PMID:29462966

  11. A method of operation scheduling based on video transcoding for cluster equipment

    NASA Astrophysics Data System (ADS)

    Zhou, Haojie; Yan, Chun

    2018-04-01

    Real-time video transcoding clusters face massive growth in the number of video jobs together with a diversity of resolutions and bit rates. After analyzing current mainstream task scheduling algorithms and the characteristics of clusters built from real-time video transcoding equipment, a task delay scheduling algorithm tailored to these characteristics is proposed. This algorithm enables the cluster to achieve better performance in generating the job queue and in dispatching the lower part of the job queue when job instructions are received. Finally, a small real-time video transcoding cluster is constructed to analyze the computing capability, running time, resource occupation, and other aspects of the various algorithms in job scheduling. The experimental results show that, compared with traditional cluster task scheduling algorithms, the task delay scheduling algorithm is more flexible and efficient.

  12. Scheduling for energy and reliability management on multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, and many well-recognized algorithms have been proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms with respect to reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes preserve system reliability while still achieving substantial energy savings.

  13. A Genetic Algorithm Tool (splicer) for Complex Scheduling Problems and the Space Station Freedom Resupply Problem

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Valenzuela-Rendon, Manuel

    1993-01-01

    The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.

  14. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm to real-world application matrices on an IBM SP RS/6000 distributed-memory machine.

  15. JOBS. A Partnership between Education and Industry.

    ERIC Educational Resources Information Center

    Mann, Sandra; And Others

    This packet contains 15 lessons developed in a workplace basic skills project for the metal casting industry established jointly by Central Alabama Community College and Robinson Foundry, Inc. The lessons cover the following topics: (1) green sand schedule; (2) the core room; (3) the core room (continued); (4) figuring time; (5) the cleaning room;…

  16. Hop Optimization and Relay Node Selection in Multi-hop Wireless Ad-Hoc Networks

    NASA Astrophysics Data System (ADS)

    Li, Xiaohua(Edward)

    In this paper we propose an efficient approach to determine the optimal hops for multi-hop ad hoc wireless networks. Based on the assumption that nodes use successive interference cancellation (SIC) and maximal ratio combining (MRC) to deal with mutual interference and to utilize all the received signal energy, we show that the signal-to-interference-plus-noise ratio (SINR) of a node is determined only by the nodes before it, not the nodes after it, along a packet forwarding path. Based on this observation, we propose an iterative procedure to select the relay nodes and to calculate the path SINR as well as the capacity of an arbitrary multi-hop packet forwarding path. The complexity of the algorithm is extremely low, and it scales well with network size. The algorithm is applicable in arbitrarily large networks, and simulations demonstrate its desirable performance. The algorithm can be helpful in analyzing the performance of multi-hop wireless networks.

  17. VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1

    DTIC Science & Technology

    1987-11-01

    synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73]...schedulability test based on the rate-monotonic algorithm, namely task-lumping [Sha 87], was necessary to calculate the theoretically expected schedulability...' Guide, Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W. Scheduling Algorithms for Multi-programming in a Hard-Real-Time

  18. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each sub-problem (a sketch of the NEH step is given below). Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the presented discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
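
    The bat-algorithm machinery itself is more involved; the sketch below shows only the NEH heuristic mentioned above, a standard constructive method for the permutation flow shop: jobs are ordered by decreasing total processing time and each one is inserted at the position that minimizes the partial makespan. The processing-time matrix is illustrative.

```python
def makespan(sequence, proc):
    """Completion time of the last job on the last machine for a permutation
    flow shop; proc[j][m] is the processing time of job j on machine m."""
    machines = len(proc[0])
    finish = [0.0] * machines                        # completion times per machine
    for job in sequence:
        for m in range(machines):
            ready = finish[m - 1] if m else 0.0      # done on the previous machine
            finish[m] = max(finish[m], ready) + proc[job][m]
    return finish[-1]

def neh(proc):
    """NEH constructive heuristic: insert jobs, ordered by decreasing total
    processing time, at the position minimizing the partial makespan."""
    jobs = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    sequence = [jobs[0]]
    for job in jobs[1:]:
        best_seq, best_cost = None, float("inf")
        for pos in range(len(sequence) + 1):         # try every insertion point
            candidate = sequence[:pos] + [job] + sequence[pos:]
            cost = makespan(candidate, proc)
            if cost < best_cost:
                best_seq, best_cost = candidate, cost
        sequence = best_seq
    return sequence, makespan(sequence, proc)

proc_times = [[3, 4, 2], [2, 5, 1], [4, 1, 3], [2, 3, 3]]   # 4 jobs x 3 machines (toy data)
print(neh(proc_times))
```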

  19. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each sub-problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the presented discrete bat algorithm for the optimal permutation flow shop scheduling problem.

  20. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the length of the operators' mobilities, as can be seen from the presented benchmark example (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.

  1. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
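
    A minimal sketch of the core rescaling idea (the actual algorithm operates on PACE input decks per work breakdown structure block and element of cost; the per-period profile below is illustrative): the baseline per-period resource distribution is linearly stretched or compressed to the new schedule length while preserving the total.

```python
import numpy as np

def rescale_distribution(baseline, new_periods):
    """Linearly expand/contract a per-period resource distribution to a new
    schedule length, preserving the total resource amount (sketch only)."""
    baseline = np.asarray(baseline, dtype=float)
    old_n, total = len(baseline), baseline.sum()
    # Sample the baseline profile at proportionally mapped positions.
    old_x = np.linspace(0.0, 1.0, old_n)
    new_x = np.linspace(0.0, 1.0, new_periods)
    profile = np.interp(new_x, old_x, baseline)
    return profile * (total / profile.sum())         # renormalize to the same total

baseline_cost = [10, 20, 40, 20, 10]                 # baseline monthly spread (toy data)
print(rescale_distribution(baseline_cost, 8))        # stretched to an 8-month schedule
```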

  2. A meta-heuristic method for solving scheduling problem: crow search algorithm

    NASA Astrophysics Data System (ADS)

    Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi

    2018-04-01

    Scheduling is one of the most important processes in industry, both in manufacturing and in services. Scheduling is the process of selecting resources to perform operations on tasks; resources can be machines, people, tasks, jobs, or operations. Selecting the optimum sequence of jobs from a permutation is an essential issue in scheduling research, since the optimum sequence is the optimum solution of the scheduling problem. Scheduling becomes an NP-hard problem when the number of jobs in the sequence grows beyond what exact algorithms can process. In order to obtain optimum results, a method is needed that can solve complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve such scheduling problems. The recently published method called the Crow Search Algorithm (CSA) is adopted in this research to solve a scheduling problem. CSA is an evolutionary meta-heuristic method based on the behavior of crows in flocks. The results of CSA for solving the scheduling problem are compared with other algorithms. From the comparison, it is found that CSA performs better than the other algorithms in terms of solution quality and computation time.

  3. Prioritized packet video transmission over time-varying wireless channel using proactive FEC

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-12-01

    Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.

  4. OSI Network-layer Abstraction: Analysis of Simulation Dynamics and Performance Indicators

    NASA Astrophysics Data System (ADS)

    Lawniczak, Anna T.; Gerisch, Alf; Di Stefano, Bruno

    2005-06-01

    The Open Systems Interconnection (OSI) reference model provides a conceptual framework for communication among computers in a data communication network. The Network Layer of this model is responsible for the routing and forwarding of packets of data. We investigate the OSI Network Layer and develop an abstraction suitable for the study of various network performance indicators, e.g. throughput, average packet delay, average packet speed, average packet path-length, etc. We investigate how the network dynamics and the network performance indicators are affected by various routing algorithms and by the addition of randomly generated links into a regular network connection topology of fixed size. We observe that the network dynamics is not simply the sum of effects resulting from adding individual links to the connection topology but rather is governed nonlinearly by the complex interactions caused by the existence of all randomly added and already existing links in the network. Data for our study was gathered using Netzwerk-1, a C++ simulation tool that we developed for our abstraction.

  5. Proportional fair scheduling algorithm based on traffic in satellite communication system

    NASA Astrophysics Data System (ADS)

    Pan, Cheng-Sheng; Sui, Shi-Long; Liu, Chun-ling; Shi, Yu-Xin

    2018-02-01

    In a satellite communication network, in order to address the problems of low system capacity and user fairness when multiple users access the satellite network on the downlink, and taking into account the characteristics of user data services, a scheduling algorithm addressing throughput capacity and user fairness is proposed: the Proportional Fairness Algorithm Based on Traffic (B-PF). The algorithm improves on the proportional fair algorithm used in wireless communication systems by taking into account both the user channel condition and the buffered traffic information. The user's outgoing traffic is used as an adjustment factor of the scheduling priority, and the concept of traffic satisfaction is introduced (a sketch of such a priority rule is given below). First, the algorithm calculates each user's priority according to the scheduling rule and dispatches the user with the highest priority. Second, when a scheduled user's traffic demand is already satisfied, the system dispatches the next-priority user. The simulation results show that, compared with the PF algorithm, B-PF improves system throughput, traffic satisfaction, and fairness.
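
    A minimal sketch in the spirit of the description above, not the paper's exact B-PF formula: the classic proportional-fair metric (achievable rate divided by smoothed served throughput) is scaled by an assumed buffered-traffic factor, and users whose traffic demand is already satisfied are skipped. All field names and the traffic factor are assumptions.

```python
def bpf_schedule(users):
    """Pick the user to serve this slot. Each user dict carries assumed fields:
    'rate' (achievable rate now), 'avg_tput' (smoothed served rate),
    'queued' (buffered traffic, bits) and 'satisfied' (traffic demand already met)."""
    def priority(u):
        pf = u["rate"] / max(u["avg_tput"], 1e-9)                 # classic proportional fairness
        traffic_factor = u["queued"] / (u["queued"] + u["rate"])  # assumed adjustment factor
        return pf * traffic_factor

    eligible = [u for u in users if not u["satisfied"] and u["queued"] > 0]
    return max(eligible, key=priority) if eligible else None

users = [
    {"id": 1, "rate": 2e6, "avg_tput": 1e6, "queued": 5e6, "satisfied": False},
    {"id": 2, "rate": 3e6, "avg_tput": 4e6, "queued": 1e6, "satisfied": False},
    {"id": 3, "rate": 5e6, "avg_tput": 1e6, "queued": 2e6, "satisfied": True},
]
print(bpf_schedule(users)["id"])   # user 1 wins in this toy example
```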

  6. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    NASA Astrophysics Data System (ADS)

    Tsow, Alex

    A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.

  7. Non preemptive soft real time scheduler: High deadline meeting rate on overload

    NASA Astrophysics Data System (ADS)

    Khalib, Zahereel Ishwar Abdul; Ahmad, R. Badlishah; El-Shaikh, Mohamed

    2015-05-01

    While preemptive scheduling has gained more attention among researchers, current work on non-preemptive scheduling has shown promising results for soft real-time job scheduling. In this paper we present a non-preemptive scheduling algorithm meant for soft real-time applications, which is capable of producing better performance during overload while maintaining excellent performance during normal load. The approach taken by this algorithm has shown more promising results compared to other algorithms, including its immediate predecessor. We present the analysis made prior to the inception of the algorithm, as well as simulation results comparing our algorithm, named gutEDF, with EDF and gEDF. We are convinced that grouping jobs using purely dynamic parameters produces better performance.

  8. A new task scheduling algorithm based on value and time for cloud platform

    NASA Astrophysics Data System (ADS)

    Kuang, Ling; Zhang, Lichen

    2017-08-01

    Task scheduling, a key means of increasing resource utilization and enhancing system performance, is a perennial problem, especially on cloud platforms. Based on the value-density approach of real-time task scheduling systems and the characteristics of distributed systems, and by further studying cloud technology and real-time systems, this paper presents a new task scheduling algorithm: Least Level Value Density First (LLVDF). The algorithm not only introduces time and value attributes for tasks, but can also describe the weighting relationships between these properties mathematically. This feature gives it an advantage in distinguishing between different tasks more dynamically and more reasonably. When the scheme is used for priority calculation in dynamic task scheduling on a cloud platform, it can schedule and distinguish large numbers and many kinds of tasks more efficiently (an illustrative priority rule is sketched below). The paper designs experiments using distributed server simulation models, based on the M/M/C queueing model with negative arrivals, to compare the algorithm against traditional algorithms and to show its characteristics and advantages.
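
    An illustrative priority rule in the spirit of value-density scheduling, not the LLVDF formula from the paper: the priority combines each task's value density (value per unit of execution time) with its deadline urgency under assumed weights. Field names and weights are assumptions.

```python
import time

def value_density_priority(task, w_value=0.7, w_time=0.3, now=None):
    """Illustrative priority: weight the task's value density against its deadline
    urgency. The weights and the combination are assumptions, not LLVDF's formula."""
    now = time.time() if now is None else now
    value_density = task["value"] / max(task["exec_time"], 1e-9)
    slack = max(task["deadline"] - now, 1e-9)
    urgency = 1.0 / slack                       # closer deadlines raise the priority
    return w_value * value_density + w_time * urgency

def pick_next(tasks, now=None):
    return max(tasks, key=lambda t: value_density_priority(t, now=now))

now = 0.0
tasks = [
    {"name": "A", "value": 10.0, "exec_time": 2.0, "deadline": 5.0},
    {"name": "B", "value": 4.0,  "exec_time": 0.5, "deadline": 20.0},
    {"name": "C", "value": 6.0,  "exec_time": 1.0, "deadline": 2.0},
]
print(pick_next(tasks, now=now)["name"])
```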

  9. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks.

    PubMed

    Khan, Anwar; Ahmedy, Ismail; Anisi, Mohammad Hossein; Javaid, Nadeem; Ali, Ihsan; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-09

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay.

  10. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
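
    Since maximizing the number of lost packets XOR-ed into one retransmission is NP-complete (as the paper proves), a simple greedy sketch of the kind of heuristic involved is shown below; the selection order and data model are assumptions, not the paper's algorithm. A packet is added to the coded set only if no receiver would then be missing more than one packet in the set, so every receiver can still decode its missing packet by XOR-ing the coded retransmission with packets it already holds.

```python
def greedy_xor_set(loss_map):
    """loss_map: packet_id -> set of receivers that lost it.
    Returns a set of packets that can be XOR-ed into one retransmission:
    every receiver misses at most one packet in the set."""
    coded, covered_receivers = [], set()
    # Heuristic order: most-demanded packets first (an assumption, not the paper's rule).
    for pkt in sorted(loss_map, key=lambda p: -len(loss_map[p])):
        if loss_map[pkt] & covered_receivers:
            continue                      # some receiver would miss two packets in the set
        coded.append(pkt)
        covered_receivers |= loss_map[pkt]
    return coded

losses = {
    "p1": {"r1", "r2"},
    "p2": {"r3"},
    "p3": {"r2", "r4"},    # conflicts with p1 at receiver r2
    "p4": {"r5"},
}
print(greedy_xor_set(losses))   # e.g. ['p1', 'p2', 'p4'] with p3 left for a later round
```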

  11. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks

    PubMed Central

    Khan, Anwar; Anisi, Mohammad Hossein; Javaid, Nadeem; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-01

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay. PMID:29315247

  12. Comparison of OPC job prioritization schemes to generate data for mask manufacturing

    NASA Astrophysics Data System (ADS)

    Lewis, Travis; Veeraraghavan, Vijay; Jantzen, Kenneth; Kim, Stephen; Park, Minyoung; Russell, Gordon; Simmons, Mark

    2015-03-01

    Delivering mask ready OPC corrected data to the mask shop on-time is critical for a foundry to meet the cycle time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as, priority based resource allocation and fair share resource allocation. In order to maximize computer cluster efficiency, minimize the cost of the data processing and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each of the scheduling algorithms will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule and the trade-offs associated with each method will be analyzed. It is now possible to introduce advance scheduling algorithms to the OPC data processing environment to meet the goals of on-time delivery of mask ready OPC data while maximizing efficiency and reducing cost.

  13. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job shop scheduling is a state-space search problem belonging to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music-Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. For comparison, about 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving job shop scheduling problems are outlined.

  14. Binary Trees and Parallel Scheduling Algorithms.

    DTIC Science & Technology

    1980-09-01

    been processed for p_i time units. If a job does not complete by its due time, it is tardy. In a nonpreemptive schedule, job i is scheduled to process...the preemptive schedule obtained by the algorithm of section 2.1.2 also minimizes ΣT_i, this problem is easily solved in parallel. When lci is to e...August 1978, pp. 657-661. 14. Horn, W. A., "Some simple scheduling algorithms," Naval Res. Logist. Quart., Vol. 21, pp. 177-185, 1974. 15. Horowitz, E

  15. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.

    PubMed

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-07-08

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.

  16. 76 FR 3184 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    ...); (ii) establish a VIX Tier Appointment; (iii) amend the monthly fee for Floor Broker Trading Permits... demutualization, CBOE amended its Fees Schedule to establish Trading Permit, tier appointment and bandwidth packet... permit ($) 1 permit 10 permits 6,000 Tier 1 11 permits 20 permits 4,800 Tier 2 21 or more permits...

  17. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best-known solutions published in the literature, as well as with the hybrid particle swarm algorithm and the variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.

  18. An Emergency Packet Forwarding Scheme for V2V Communication Networks

    PubMed Central

    2014-01-01

    This paper proposes an effective warning message forwarding scheme for cooperative collision avoidance. In an emergency situation, an emergency-detecting vehicle warns the neighboring vehicles via an emergency warning message. Since the transmission range is limited, the warning message is broadcast in a multihop manner. Broadcast packets lead to two challenges in forwarding the warning message through the vehicular network: redundancy of warning messages and competition with nonemergency transmissions. In this paper, we study and address these two major challenges to achieve low latency in delivery of the warning message. To reduce the intervehicle latency and end-to-end latency, which cause chain collisions, we propose a two-way intelligent broadcasting method with an adaptable distance-dependent backoff algorithm (sketched below). Considering the locations of vehicles, the proposed algorithm controls the broadcast of a warning message to reduce redundant EWM messages and adaptively chooses the contention window to compete with nonemergency transmissions. Via simulations, we show that our proposed algorithm reduces the probability of rear-end crashes by 70% compared to previous algorithms by reducing the intervehicle delay. We also show that the end-to-end propagation delay of the warning message is reduced by 55%. PMID:25054181
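
    A minimal sketch of a distance-dependent backoff of the kind described above (the paper's adaptive rule and parameters are not reproduced; the range, slot count, and suppression policy below are assumptions): receivers farther from the sender draw their backoff from a smaller contention window, so the vehicle that advances the warning furthest tends to rebroadcast first, and pending rebroadcasts are suppressed once enough duplicate copies are overheard.

```python
import random

def rebroadcast_backoff(distance_m, tx_range_m=300.0, max_slots=32, slot_time=50e-6):
    """Distance-dependent backoff (sketch): the contention window shrinks as the
    receiver's distance from the sender grows, so the farthest forwarder wins."""
    d = min(max(distance_m, 0.0), tx_range_m)
    cw = max(1, int(max_slots * (1.0 - d / tx_range_m)))   # far receiver -> small window
    return random.randint(0, cw - 1) * slot_time

def should_suppress(already_heard_copies, threshold=1):
    """Cancel a pending rebroadcast once the warning has been overheard again
    from another node (redundancy suppression; the threshold is an assumption)."""
    return already_heard_copies > threshold

print(rebroadcast_backoff(280.0))   # near the edge of the range: very short backoff
print(rebroadcast_backoff(40.0))    # close to the sender: longer backoff
```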

  19. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.

    2009-08-01

    Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining these two (also called mixed-parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.

  20. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James

    2014-01-01

    NASA's space data-communications infrastructure, the Space Network and the Ground Network, provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure, the relay satellites and the ground stations, can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.

  1. Experimental energy consumption of Frame Slotted ALOHA and Distributed Queuing for data collection scenarios.

    PubMed

    Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier

    2014-07-24

    Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori, because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. Contrarily, DQ does not require knowledge of the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA.
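
    As a rough illustration of the state-machine energy model described above, the snippet below multiplies the average energy per visit to each protocol state by the average number of visits needed to deliver one packet and sums the products. The state names and numbers are illustrative placeholders, not measured values from the paper.

```python
# Hypothetical sketch: per-packet energy as the sum over protocol states of
# (average energy per visit) x (average number of visits to that state).
# State names and values below are illustrative, not measured data.

def energy_per_packet(avg_energy_per_visit, avg_visits):
    """Both arguments map state name -> value (joules and visit counts)."""
    return sum(avg_energy_per_visit[s] * avg_visits[s] for s in avg_energy_per_visit)

# Example with made-up numbers for a Frame Slotted ALOHA node:
fsa_energy = energy_per_packet(
    avg_energy_per_visit={"rx_request": 1.2e-3, "contend": 0.8e-3, "tx_data": 2.5e-3, "sleep": 0.1e-3},
    avg_visits={"rx_request": 1.0, "contend": 1.6, "tx_data": 1.6, "sleep": 1.0},
)
print(f"Average energy per delivered packet: {fsa_energy * 1e3:.2f} mJ")
```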

  2. Fuzzy Logic-based Intelligent Scheme for Enhancing QoS of Vertical Handover Decision in Vehicular Ad-hoc Networks

    NASA Astrophysics Data System (ADS)

    Azzali, F.; Ghazali, O.; Omar, M. H.

    2017-08-01

    The design of next-generation networks across various technologies under the “Anywhere, Anytime” paradigm offers seamless connectivity across different coverage areas. A conventional algorithm such as the RSSThreshold algorithm, which uses only the received signal strength (RSS) as a metric, degrades handover performance in terms of handover latency, delay, packet loss, and handover failure probability. Moreover, the RSS-based algorithm is suitable only for horizontal handover decisions when examining quality of service (QoS), as opposed to the vertical handover decisions required by advanced technologies. In the next generation network, vertical handover can be initiated based on the user’s convenience or choice rather than connectivity reasons. This study proposes a vertical handover decision algorithm that uses a Fuzzy Logic (FL) algorithm to increase QoS performance in heterogeneous vehicular ad-hoc networks (VANET). The study uses network simulator 2.29 (NS 2.29) along with a mobility traffic network and generator to implement the simulation scenarios and topologies. This helps the simulation achieve a realistic VANET mobility scenario, so that the required analysis of QoS performance in the vertical handover can be conducted. The proposed Fuzzy Logic algorithm shows improvement over the conventional algorithm (RSSThreshold) in the average percentage of handover QoS, achieving 20%, 21% and 13% improvements in handover latency, delay, and packet loss respectively. This is achieved by triggering a process in layers two and three that enhances the handover performance.

  3. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.

  4. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to have such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
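
    The following sketch illustrates only the square-root cooling schedule discussed above, T_t = T0 / sqrt(t), inside a plain Metropolis-style simulated annealing loop on a toy objective. It does not implement the paper's stochastic approximation Monte Carlo component, and the objective function, step size and starting temperature are arbitrary choices.

```python
import math
import random

def objective(x):
    # A simple multimodal test function (illustrative choice, not from the paper).
    return x * x + 10.0 * math.sin(3.0 * x)

def sa_square_root_cooling(t0=10.0, steps=20000, step_size=0.5):
    x = random.uniform(-10.0, 10.0)
    fx = objective(x)
    best_x, best_f = x, fx
    for t in range(1, steps + 1):
        temp = t0 / math.sqrt(t)          # square-root cooling schedule
        cand = x + random.gauss(0.0, step_size)
        fc = objective(cand)
        # Metropolis acceptance: always accept improvements, otherwise accept
        # with probability exp(-(fc - fx) / temp).
        if fc <= fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

print(sa_square_root_cooling())
```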

  5. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James L.

    2010-01-01

    NASA's space data-communications infrastructure, the Space Network and the Ground Network, provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.

  6. A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ida, Kenichi; Osawa, Akira

    In this paper, we propose a new idle time shortening method for Job-shop scheduling problems (JSPs) and insert it into a genetic algorithm (GA). The purpose of JSP is to find a schedule with the minimum makespan, and we suppose that reducing the idle time of a machine is an effective way to improve the makespan. The left shift is a well-known existing algorithm for shortening idle time, but it cannot move work into every idle interval, so some idle time is not shortened by the left shift. We propose two kinds of algorithms that shorten such idle time. Next, we combine these algorithms with the reversal of a schedule. We apply a GA with these algorithms to benchmark problems and show its effectiveness.

  7. Cellular-V2X Communications for Platooning: Design and Evaluation

    PubMed Central

    2018-01-01

    Platooning is a cooperative driving application where autonomous/semi-autonomous vehicles move on the same lane in a train-like manner, keeping a small constant inter-vehicle distance, in order to reduce fuel consumption and gas emissions and to achieve safe and efficient transport. To this aim, they may exploit multiple on-board sensors (e.g., radars, LiDARs, positioning systems) and direct vehicle-to-vehicle communications to synchronize their manoeuvres. The main objective of this paper is to discuss the design choices and factors that determine the performance of a platooning application, when exploiting the emerging cellular vehicle-to-everything (C-V2X) communication technology and considering the scheduled mode, specified by 3GPP for communications over the sidelink assisted by the eNodeB. Since no resource management algorithm is currently mandated by 3GPP for this new challenging context, we focus on analyzing the feasibility and performance of the dynamic scheduling approach, with platoon members asking for radio resources on a per-packet basis. We consider two ways of implementing dynamic scheduling, currently unspecified by 3GPP: the sequential mode, that is somehow reminiscent of time division multiple access solutions based on IEEE 802.11p—till now the only investigated access technology for platooning—and the simultaneous mode with spatial frequency reuse enabled by the eNodeB. The evaluation conducted through system-level simulations provides helpful insights about the proposed configurations and C-V2X parameter settings that mainly affect the reliability and latency performance of data exchange in platoons, under different load settings. Achieved results show that the proposed simultaneous mode succeeds in reducing the latency in the update cycle in each vehicle’s controller, thus enabling future high-density platooning scenarios. PMID:29751690

  8. Cellular-V2X Communications for Platooning: Design and Evaluation.

    PubMed

    Nardini, Giovanni; Virdis, Antonio; Campolo, Claudia; Molinaro, Antonella; Stea, Giovanni

    2018-05-11

    Platooning is a cooperative driving application where autonomous/semi-autonomous vehicles move on the same lane in a train-like manner, keeping a small constant inter-vehicle distance, in order to reduce fuel consumption and gas emissions and to achieve safe and efficient transport. To this aim, they may exploit multiple on-board sensors (e.g., radars, LiDARs, positioning systems) and direct vehicle-to-vehicle communications to synchronize their manoeuvres. The main objective of this paper is to discuss the design choices and factors that determine the performance of a platooning application, when exploiting the emerging cellular vehicle-to-everything (C-V2X) communication technology and considering the scheduled mode, specified by 3GPP for communications over the sidelink assisted by the eNodeB. Since no resource management algorithm is currently mandated by 3GPP for this new challenging context, we focus on analyzing the feasibility and performance of the dynamic scheduling approach, with platoon members asking for radio resources on a per-packet basis. We consider two ways of implementing dynamic scheduling, currently unspecified by 3GPP: the sequential mode, that is somehow reminiscent of time division multiple access solutions based on IEEE 802.11p-till now the only investigated access technology for platooning-and the simultaneous mode with spatial frequency reuse enabled by the eNodeB. The evaluation conducted through system-level simulations provides helpful insights about the proposed configurations and C-V2X parameter settings that mainly affect the reliability and latency performance of data exchange in platoons, under different load settings. Achieved results show that the proposed simultaneous mode succeeds in reducing the latency in the update cycle in each vehicle's controller, thus enabling future high-density platooning scenarios.

  9. An algorithm for a single machine scheduling problem with sequence dependent setup times and scheduling windows

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1975-01-01

    An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.

  10. An Algorithm for Automatically Modifying Train Crew Schedule

    NASA Astrophysics Data System (ADS)

    Takahashi, Satoru; Kataoka, Kenji; Kojima, Teruhito; Asami, Masayuki

    Once a break-down of the train schedule occurs, the crew schedule as well as the train schedule has to be modified as quickly as possible to restore them. In this paper, we propose an algorithm for automatically modifying a crew schedule that takes all constraints into consideration, presenting a model of the combined problem of crews and trains. The proposed algorithm builds an initial solution by relaxing some of the constraint conditions, and then uses a Taboo-search method to revise this solution in order to minimize the degree of constraint violation resulting from these relaxed conditions. We then show not only that the algorithm can generate a constraint-satisfying solution, but also that the solution will satisfy the experts. That is, by applying the algorithm to actual cases of train-schedule break-down and comparing its solutions with manually produced ones from several points of view, we show that the proposed algorithm is capable of producing a usable solution in a short time, and that the solution is at least as good as those produced manually.

  11. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    PubMed Central

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-01-01

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722

  12. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be an NP-hard problem. In a DHC system, task execution time is dependent on the machine to which it is assigned and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of inter-dependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
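
    The randomized task ordering at the core of RS can be sketched as a topological sort that picks the next task uniformly at random among the currently ready tasks, so precedence constraints are always respected. The sketch below assumes a small illustrative DAG; the heuristic mapping from task order to schedule is omitted.

```python
import random

def random_topological_order(successors):
    """successors: dict task -> list of dependent tasks (a DAG)."""
    indegree = {t: 0 for t in successors}
    for deps in successors.values():
        for d in deps:
            indegree[d] = indegree.get(d, 0) + 1
    ready = [t for t, deg in indegree.items() if deg == 0]
    order = []
    while ready:
        # Random choice among ready tasks keeps every precedence-valid order reachable.
        task = ready.pop(random.randrange(len(ready)))
        order.append(task)
        for d in successors.get(task, []):
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    return order

dag = {"A": ["C"], "B": ["C", "D"], "C": ["E"], "D": ["E"], "E": []}
print(random_topological_order(dag))   # e.g. ['B', 'A', 'D', 'C', 'E']
```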

  13. A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy

    PubMed Central

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments. PMID:23690742

  14. A dynamic scheduling method of Earth-observing satellites by employing rolling horizon strategy.

    PubMed

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments.

  15. OGUPSA sensor scheduling architecture and algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Zhixiong; Hintz, Kenneth J.

    1996-06-01

    This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are used successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline as well as generate an optimal schedule in the sense of minimum makespan for a group of tasks with the same priorities. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while being a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
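
    A hedged sketch of how the three policies could be applied in succession follows: MUF selects the next task, then ECF and LVF select the sensor for it. The data structures, field names and the convention that a smaller urgency number means a more urgent task are assumptions of this sketch, not details taken from the paper.

```python
# Illustrative policy cascade: MUF picks the task, ECF then LVF pick the sensor.

def pick_task(tasks):
    # Most-urgent-first: smaller urgency number = more urgent (sketch convention).
    return min(tasks, key=lambda t: t["urgency"])

def pick_sensor(task, sensors):
    capable = [s for s in sensors if task["type"] in s["capabilities"]]
    # Earliest-completed-first, ties broken by least-versatile-first (fewest capabilities).
    return min(capable, key=lambda s: (s["free_at"] + task["duration"], len(s["capabilities"])))

tasks = [{"id": 1, "urgency": 2, "type": "track", "duration": 3.0},
         {"id": 2, "urgency": 1, "type": "search", "duration": 5.0}]
sensors = [{"id": "radar", "free_at": 1.0, "capabilities": {"track", "search"}},
           {"id": "ir", "free_at": 0.0, "capabilities": {"search"}}]

task = pick_task(tasks)
sensor = pick_sensor(task, sensors)
print(task["id"], sensor["id"])   # task 2 goes to the IR sensor (earliest finish, least versatile)
```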

  16. Touch the Future: Discovering Abilities through Technology for Living, Learning, Working and Playing. Southeast Regional Conference (3rd, Atlanta, Georgia, April 10-12, 1991).

    ERIC Educational Resources Information Center

    Georgia State Dept. of Human Resources, Atlanta. Div. of Rehabilitation Services.

    This packet of materials was originally intended for participants in a 1991 conference on assistive technology for the disabled. After a detailed listing of the conference schedule, individual sections provide abstracts, biographical sketches, and summaries concerning the following conference topics: blending, computer labs, family, grants and…

  17. 78 FR 25500 - Options Price Reporting Authority; Notice of Filing and Immediate Effectiveness of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-01

    ... chain. The term "quote packet" is defined in footnote 6 to OPRA's Fee Schedule as consisting of..., bid/ask and related market data. The term "options chain" is also defined in footnote 6 to OPRA's... In essence, an OPRA Subscriber may obtain access to OPRA data in one of two ways: either by signing a...

  18. Practices & Procedures of Mason Tending I & II. Instructor Manual. Trainee Manual.

    ERIC Educational Resources Information Center

    Laborers-AGC Education and Training Fund, Pomfret Center, CT.

    This packet consists of the instructor and trainee manuals for two courses: practices and procedures of mason tending I and II. The instructor manual for mason tending I contains a schedule for a 40-hour, 5-day course and instructor outline. The outline provides a step-by-step description of the instructor's activities and includes answer sheets…

  19. Strategies for Optimal MAC Parameters Tuning in IEEE 802.15.6 Wearable Wireless Sensor Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes

    2015-09-01

    Wireless body area networks (WBAN) have penetrated immensely in revolutionizing the classical health-care system. Recently, a number of WBAN applications have emerged which introduce potential limits to existing solutions. In particular, the IEEE 802.15.6 standard has provided great flexibility, provisions and capabilities to deal with emerging applications. In this paper, we investigate application-specific throughput analysis by fine-tuning the physical (PHY) and medium access control (MAC) parameters of the IEEE 802.15.6 standard. Based on PHY characterizations in narrow band, at the MAC layer, carrier sense multiple access with collision avoidance (CSMA/CA) and scheduled access protocols are extensively analyzed. It is concluded that the IEEE 802.15.6 standard can satisfy the throughput requirements of most WBAN applications, achieving a maximum of 680 Kbps. However, for emerging applications that require high-quality audio or video transmission, the standard is not able to meet the constraints. Moreover, delay, energy efficiency and successful packet reception are considered as key performance metrics for comparing the MAC protocols. The CSMA/CA protocol provides the best results for meeting the delay constraints of medical and non-medical WBAN applications, whereas the scheduled access approach performs very well in both energy efficiency and packet reception ratio.

  20. A Random Access Algorithm for Frequency Hopped Spread Spectrum Packet Radio Networks

    DTIC Science & Technology

    1986-03-01

    In Section 4.3, the probability p is given by expression (20).

  1. Unified Method for Delay Analysis of Random Multiple Access Algorithms.

    DTIC Science & Technology

    1985-08-01

    The rules of the algorithm yield a relation for the w_i's in terms of the packets in the first cell of the stack.

  2. DTN routing in body sensor networks with dynamic postural partitioning.

    PubMed

    Quwaider, Muhannad; Biswas, Subir

    2010-11-01

    This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols is evaluated experimentally and via simulation, and is compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with a delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.

  3. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. Firstly, a cloud computing task scheduling model is built and a fitness function is defined according to this model. The fitness function is then optimized with the improved differential evolution algorithm, which uses a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving effective optimal scheduling of cloud computing tasks.
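
    A minimal sketch of differential-evolution-based task scheduling follows: each individual encodes a task-to-VM assignment as a real vector, and the fitness is the resulting makespan. It is a plain DE/rand/1/bin baseline under assumed task lengths and VM speeds, not the paper's improved variant with dynamic selection and mutation strategies.

```python
import random

TASK_LEN = [8, 3, 5, 9, 2, 7, 4, 6]      # illustrative task lengths
VM_SPEED = [1.0, 2.0, 1.5]               # illustrative VM processing speeds
NV = len(VM_SPEED)

def makespan(vec):
    # Round each component to a VM index and compute the most-loaded VM's finish time.
    load = [0.0] * NV
    for length, x in zip(TASK_LEN, vec):
        vm = min(int(x), NV - 1)
        load[vm] += length / VM_SPEED[vm]
    return max(load)

def de_schedule(pop_size=20, gens=200, F=0.5, CR=0.9):
    dim = len(TASK_LEN)
    pop = [[random.uniform(0, NV) for _ in range(dim)] for _ in range(pop_size)]
    fit = [makespan(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])   # DE/rand/1 mutation
                    trial.append(min(max(v, 0.0), NV - 1e-9))     # keep inside [0, NV)
                else:
                    trial.append(pop[i][j])                        # binomial crossover
            f_trial = makespan(trial)
            if f_trial <= fit[i]:                                  # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda k: fit[k])
    return [min(int(x), NV - 1) for x in pop[best]], fit[best]

print(de_schedule())   # (task-to-VM assignment, makespan)
```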

  4. Scheduling in Sensor Grid Middleware for Telemedicine Using ABC Algorithm

    PubMed Central

    Vigneswari, T.; Mohamed, M. A. Maluk

    2014-01-01

    Advances in microelectromechanical systems (MEMS) and nanotechnology have enabled design of low power wireless sensor nodes capable of sensing different vital signs in our body. These nodes can communicate with each other to aggregate data and transmit vital parameters to a base station (BS). The data collected in the base station can be used to monitor health in real time. The patient wearing sensors may be mobile leading to aggregation of data from different BS for processing. Processing real time data is compute-intensive and telemedicine facilities may not have appropriate hardware to process the real time data effectively. To overcome this, sensor grid has been proposed in literature wherein sensor data is integrated to the grid for processing. This work proposes a scheduling algorithm to efficiently process telemedicine data in the grid. The proposed algorithm uses the popular swarm intelligence algorithm for scheduling to overcome the NP complete problem of grid scheduling. Results compared with other heuristic scheduling algorithms show the effectiveness of the proposed algorithm. PMID:25548557

  5. Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.

    PubMed

    Cheng, Ching-Hsue; Liu, Wei-Xiang

    2018-05-28

    Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation methods are less laborious and time-consuming, they are restricted to several specific types of images. In addition, hybrid segmentation techniques improve on the shortcomings of single segmentation methods. Therefore, this study proposed a hybrid segmentation combined with a rough set classifier and the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image processing method to enhance the accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, the wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. In verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare it with the TV-seg (total variation segmentation) algorithm, the Discrete Cosine Transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.

  6. ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms

    NASA Astrophysics Data System (ADS)

    Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François

    2015-10-01

    Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.

  7. A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times

    NASA Astrophysics Data System (ADS)

    Li, Xin; Fung, Richard Y. K.

    2018-02-01

    This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers should be within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in a series based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.

  8. Scheduling logic for Miles-In-Trail traffic management

    NASA Technical Reports Server (NTRS)

    Synnestvedt, Robert G.; Swenson, Harry; Erzberger, Heinz

    1995-01-01

    This paper presents an algorithm which can be used for scheduling arrival air traffic in an Air Route Traffic Control Center (ARTCC or Center) entering a Terminal Radar Approach Control (TRACON) Facility. The algorithm aids a Traffic Management Coordinator (TMC) in deciding how to restrict traffic while the traffic expected to arrive in the TRACON exceeds the TRACON capacity. The restrictions employed fall under the category of Miles-in-Trail, one of two principal traffic separation techniques used in scheduling arrival traffic. The algorithm calculates aircraft separations for each stream of aircraft destined to the TRACON. The calculations depend upon TRACON characteristics, TMC preferences, and other parameters adapted to the specific needs of scheduling traffic in a Center. Some preliminary results of traffic simulations scheduled by this algorithm are presented, and conclusions are drawn as to the effectiveness of using this algorithm in different traffic scenarios.

  9. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecrafts, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecrafts. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.

  10. Algorithm comparison for schedule optimization in MR fingerprinting.

    PubMed

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Compression of real time volumetric echocardiographic data using modified SPIHT based on the three-dimensional wavelet packet transform.

    PubMed

    Hang, X; Greenberg, N L; Shiota, T; Firstenberg, M S; Thomas, J D

    2000-01-01

    Real-time three-dimensional echocardiography has been introduced to provide improved quantification and description of cardiac function. Data compression is desired to allow efficient storage and improve data transmission. Previous work has suggested improved results utilizing wavelet transforms in the compression of medical data including 2D echocardiogram. Set partitioning in hierarchical trees (SPIHT) was extended to compress volumetric echocardiographic data by modifying the algorithm based on the three-dimensional wavelet packet transform. A compression ratio of at least 40:1 resulted in preserved image quality.

  12. Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.

    PubMed

    Jung, Young-Ho; Choi, Jihoon

    2017-02-25

    A new hybrid automatic repeat request (HARQ) scheme for multicast service for wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions for realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
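
    The choice of the number of autonomous retransmissions can be illustrated with a deliberately simplified model, sketched below: each copy is assumed to be decoded independently with probability p at each of N multicast receivers, and "spectral efficiency" is taken as the probability that all receivers decode within k copies, divided by k. This toy model is an assumption of the sketch, not the optimization carried out in the paper.

```python
# Toy model: all N receivers decode within k copies with probability (1 - (1 - p)**k)**N,
# and we score each candidate k by that success probability per transmitted copy.

def best_autonomous_retransmissions(p, n_receivers, k_max=20):
    def efficiency(k):
        return (1.0 - (1.0 - p) ** k) ** n_receivers / k
    return max(range(1, k_max + 1), key=efficiency)

print(best_autonomous_retransmissions(p=0.5, n_receivers=10))   # -> 5 under this toy model
```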

  13. Scheduling nursing personnel on a microcomputer.

    PubMed

    Liao, C J; Kao, C Y

    1997-01-01

    Suggests that with the shortage of nursing personnel, hospital administrators have to pay more attention to the needs of nurses to retain and recruit them. Also asserts that improving nurses' schedules is one of the most economic ways for the hospital administration to create a better working environment for nurses. Develops an algorithm for scheduling nursing personnel. Contrary to the current hospital approach, which schedules nurses on a person-by-person basis, the proposed algorithm constructs schedules on a day-by-day basis. The algorithm has inherent flexibility in handling a variety of possible constraints and goals, similar to other non-cyclical approaches. But, unlike most other non-cyclical approaches, it can also generate a quality schedule in a short time on a microcomputer. The algorithm was coded in C language and run on a microcomputer. The developed software is currently implemented at a leading hospital in Taiwan. The response to the initial implementation is quite promising.

  14. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real-time systems are examined here. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.

  15. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    PubMed

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of task node as affected by the constraint relations among task nodes, and the task node list is generated by the different priority value. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first as the completion time of task graph is indirectly influenced by the finishing time of task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high quality performance objective.
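
    The path length used to rank task nodes can be approximated by the standard critical-path recursion sketched below: for each node, the longest downstream path summing node computation costs and edge communication costs. Whether this matches the paper's exact dynamic essential path bookkeeping during scheduling is an assumption of this sketch, and the costs shown are illustrative.

```python
from functools import lru_cache

COMP = {"A": 4, "B": 2, "C": 3, "D": 5, "E": 1}              # illustrative computation costs
COMM = {("A", "C"): 2, ("B", "C"): 1, ("B", "D"): 3, ("C", "E"): 2, ("D", "E"): 1}
SUCC = {"A": ["C"], "B": ["C", "D"], "C": ["E"], "D": ["E"], "E": []}

@lru_cache(maxsize=None)
def essential_path(node):
    # Longest computation + communication path starting at `node`.
    best_tail = max((COMM[(node, s)] + essential_path(s) for s in SUCC[node]), default=0)
    return COMP[node] + best_tail

for n in sorted(SUCC, key=essential_path, reverse=True):
    print(n, essential_path(n))   # schedule ties by the longest remaining path first
```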

  16. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path

    PubMed Central

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of task node as affected by the constraint relations among task nodes, and the task node list is generated by the different priority value. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first as the completion time of task graph is indirectly influenced by the finishing time of task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high quality performance objective. PMID:27490901

  17. Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen

    2013-08-01

    Optical satellite communication with the advantages of broadband, large capacity and low power consuming broke the bottleneck of the traditional microwave satellite communication. The formation of the Space-based Information System with the technology of high performance optical inter-satellite communication and the realization of global seamless coverage and mobile terminal accessing are the necessary trend of the development of optical satellite communication. Considering the resources, missions and restraints of Data Relay Satellite Optical Communication System, a model of optical communication resources scheduling is established and a scheduling algorithm based on artificial intelligent optimization is put forwarded. According to the multi-relay-satellite, multi-user-satellite, multi-optical-antenna and multi-mission with several priority weights, the resources are scheduled reasonable by the operation: "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight is considered as the parameter of the fitness function and the scheduling project is optimized by the Genetic Algorithm. The simulation scenarios including 3 relay satellites with 6 optical antennas, 12 user satellites and 30 missions, the simulation result reveals that the algorithm obtain satisfactory results in both efficiency and performance and resources scheduling model and the optimization algorithm are suitable in multi-relay-satellite, multi-user-satellite, and multi-optical-antenna recourses scheduling problem.

  18. Routing and Scheduling Algorithms for WirelessHART Networks: A Survey

    PubMed Central

    Nobre, Marcelo; Silva, Ivanovitch; Guedes, Luiz Affonso

    2015-01-01

    Wireless communication is a trend nowadays for the industrial environment. A number of different technologies have emerged as solutions satisfying strict industrial requirements (e.g., WirelessHART, ISA100.11a, WIA-PA). As the industrial environment presents a vast range of applications, adopting an adequate solution for each case is vital to obtain good performance of the system. In this context, the routing and scheduling schemes associated with these technologies have a direct impact on important features, like latency and energy consumption. This situation has led to the development of a vast number of routing and scheduling schemes. In the present paper, we focus on the WirelessHART technology, emphasizing its most important routing and scheduling aspects in order to guide both end users and the developers of new algorithms. Furthermore, we provide a detailed literature review of the newest routing and scheduling techniques for WirelessHART, discussing each of their features. These routing algorithms have been evaluated in terms of their objectives, metrics, the usage of the WirelessHART structures and validation method. In addition, the scheduling algorithms were also evaluated by metrics, validation, objectives and, in addition, by multiple superframe support, as well as by the redundancy method used. Moreover, this paper briefly presents some insights into the main WirelessHART simulation modules available, in order to provide viable test platforms for the routing and scheduling algorithms. Finally, some open issues in WirelessHART routing and scheduling algorithms are discussed. PMID:25919371

  19. Job shop scheduling problem with late work criterion

    NASA Astrophysics Data System (ADS)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Scheduling is considered a key task in many industries, such as project-based scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered to be important and highly complex, and they are characterized as NP-hard. The job shop scheduling problems with a late work criterion and non-preemptive jobs are addressed in this paper. Late work is a fairly new objective function. It is a qualitative measure and concerns the late parts of the jobs, unlike classical objective functions, which are quantitative measures. In this work, simulated annealing was presented to solve the scheduling problem. In addition, an operation-based representation was used to encode the solution, and a neighbourhood search structure was employed to search for new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm were compared with a conventional genetic algorithm, and a conclusion was made based on the algorithm and problem.
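
    For concreteness, the late work of a job is usually defined as the portion of its processing performed after its due date, min(p_j, max(0, C_j - d_j)), and the criterion sums this quantity over all jobs; a small worked example with made-up numbers follows.

```python
def total_late_work(jobs):
    """jobs: iterable of (processing_time, completion_time, due_date)."""
    return sum(min(p, max(0, c - d)) for p, c, d in jobs)

jobs = [(4, 10, 8),   # finishes 2 units late -> late work 2
        (3, 6, 9),    # on time               -> late work 0
        (5, 20, 12)]  # 8 units late, capped at processing time 5
print(total_late_work(jobs))   # -> 7
```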

  20. Performance comparison between packet and continuous data transmission using two adaptive equalizers in shallow water

    NASA Astrophysics Data System (ADS)

    Yoon, Jong Rak; Park, Kyu-Chil; Park, Jihyun

    2015-07-01

    Transmitted signals are markedly affected by the sea surface and bottom boundaries in shallow water. The time-variant reflections from such boundaries characterize the channel as a frequency-selective fading channel and cause intersymbol interference (ISI) in underwater acoustic communication. A channel-estimate-based equalizer is usually adopted to compensate for the reflected signals in this kind of acoustic channel. In this study, we apply two approaches to packet and continuous data transmission with a quadrature phase shift keying (QPSK) system. One is the use of a two-dimensional (2D) rotation matrix in a non-frequency-selective channel. The other is the use of two types of equalizers, the feed-forward equalizer (FFE) and the decision-directed equalizer (DDE), with a normalized least mean square (NLMS) algorithm in a frequency-selective channel. The percentage improvement of packet transmission is notably better than that of continuous transmission.
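
    A minimal NLMS tap-update sketch in the spirit of the equalizers above is given below: a feed-forward filter whose error is formed from a training symbol when one is available and from a hard QPSK decision otherwise (decision-directed). The filter length, step size and the toy two-tap channel are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

def qpsk_decision(y):
    # Hard decision onto the unit-energy QPSK constellation.
    return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

def nlms_equalize(x, train, num_taps=8, mu=0.5, eps=1e-6):
    w = np.zeros(num_taps, dtype=complex)
    y_out = np.zeros(len(x), dtype=complex)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]       # regressor, most recent sample first
        y = np.vdot(w, u)                          # filter output w^H u
        y_out[n] = y
        d = train[n] if n < len(train) else qpsk_decision(y)  # training, then decision-directed
        e = d - y
        w = w + mu * np.conj(e) * u / (np.vdot(u, u).real + eps)  # NLMS tap update
    return y_out, w

# Toy usage: unit-energy QPSK symbols through a short two-tap channel plus noise.
rng = np.random.default_rng(0)
syms = qpsk_decision(rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
rx = np.convolve(syms, [1.0, 0.4 + 0.3j])[:len(syms)] \
     + 0.02 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
eq_out, taps = nlms_equalize(rx, train=syms[:500])
```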

  1. Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry

    NASA Astrophysics Data System (ADS)

    Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael

    2015-05-01

    This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
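
    A sample-by-sample constant modulus algorithm update of the kind evaluated above can be sketched as follows, driving |y|^2 toward the constant R2 (1.0 for a unit-energy constant-envelope signal). The tap count, step size and center-spike initialization are assumptions of this sketch; the paper's block-processing CMA initialized from the MMSE solution is not reproduced here.

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, r2=1.0):
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                        # center-spike initialization (assumption)
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]       # regressor, most recent sample first
        y[n] = np.vdot(w, u)                      # filter output w^H u
        err = abs(y[n]) ** 2 - r2                 # constant-modulus error
        w = w - mu * err * np.conj(y[n]) * u      # stochastic-gradient CMA update
    return y, w
```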

  2. An On-Demand Emergency Packet Transmission Scheme for Wireless Body Area Networks.

    PubMed

    Al Ameen, Moshaddique; Hong, Choong Seon

    2015-12-04

    The rapid development of sensor devices that can actively monitor human activities has given rise to a new field called wireless body area network (BAN). A BAN can manage devices in, on and around the human body. Major requirements of such a network are energy efficiency, long lifetime, low delay, security, etc. Traffic in a BAN can be scheduled (normal) or event-driven (emergency). Traditional media access control (MAC) protocols use duty cycling to improve performance. A sleep-wake up cycle is employed to save energy. However, this mechanism lacks features to handle emergency traffic in a prompt and immediate manner. To deliver an emergency packet, a node has to wait until the receiver is awake. It also suffers from overheads, such as idle listening, overhearing and control packet handshakes. An external radio-triggered wake up mechanism is proposed to handle prompt communication. It can reduce the overheads and improve the performance through an on-demand scheme. In this work, we present a simple-to-implement on-demand packet transmission scheme that takes into consideration the requirements of a BAN. The major concern is handling the event-based emergency traffic. The performance analysis of the proposed scheme is presented. The results showed significant improvements in the overall performance of a BAN compared to state-of-the-art protocols in terms of energy consumption, delay and lifetime.

  3. NASA Johnson Space Center Life Sciences Data System

    NASA Technical Reports Server (NTRS)

    Rahman, Hasan; Cardenas, Jeffery

    1994-01-01

    The Life Sciences Project Division (LSPD) at JSC, which manages human life sciences flight experiments for the NASA Life Sciences Division, augmented its Life Sciences Data System (LSDS) in support of the Spacelab Life Sciences-2 (SLS-2) mission, October 1993. The LSDS is a portable ground system supporting Shuttle, Spacelab, and Mir based life sciences experiments. The LSDS supports acquisition, processing, display, and storage of real-time experiment telemetry in a workstation environment. The system may acquire digital or analog data, storing the data in experiment packet format. Data packets from any acquisition source are archived and meta-parameters are derived through the application of mathematical and logical operators. Parameters may be displayed in text and/or graphical form, or output to analog devices. Experiment data packets may be retransmitted through the network interface and database applications may be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control and the LSDS system can be integrated with other workstations to perform a variety of functions. The generic capabilities, adaptability, and ease of use make the LSDS a cost-effective solution to many experiment data processing requirements. The same system is used for experiment systems functional and integration tests, flight crew training sessions and mission simulations. In addition, the system has provided the infrastructure for the development of the JSC Life Sciences Data Archive System scheduled for completion in December 1994.

  4. An On-Demand Emergency Packet Transmission Scheme for Wireless Body Area Networks

    PubMed Central

    Al Ameen, Moshaddique; Hong, Choong Seon

    2015-01-01

    The rapid development of sensor devices that can actively monitor human activities has given rise to a new field called wireless body area network (BAN). A BAN can manage devices in, on and around the human body. Major requirements of such a network are energy efficiency, long lifetime, low delay, security, etc. Traffic in a BAN can be scheduled (normal) or event-driven (emergency). Traditional media access control (MAC) protocols use duty cycling to improve performance. A sleep-wake up cycle is employed to save energy. However, this mechanism lacks features to handle emergency traffic in a prompt and immediate manner. To deliver an emergency packet, a node has to wait until the receiver is awake. It also suffers from overheads, such as idle listening, overhearing and control packet handshakes. An external radio-triggered wake up mechanism is proposed to handle prompt communication. It can reduce the overheads and improve the performance through an on-demand scheme. In this work, we present a simple-to-implement on-demand packet transmission scheme that takes into consideration the requirements of a BAN. The major concern is handling the event-based emergency traffic. The performance analysis of the proposed scheme is presented. The results showed significant improvements in the overall performance of a BAN compared to state-of-the-art protocols in terms of energy consumption, delay and lifetime. PMID:26690161

  5. Flight software issues in onboard automated planning: lessons learned on EO-1

    NASA Technical Reports Server (NTRS)

    Tran, Daniel; Chien, Steve; Rabideau, Gregg; Cichy, Benjamin

    2004-01-01

    This paper focuses on the onboard planner and scheduler CASPER, whose core planning engine is based on the ground system ASPEN. Given the challenges of developing flight software, we discuss several of the issues encountered in preparing the planner for flight, including reducing the code image size, determining what data to place within the engineering telemetry packet, and performing long term planning.

  6. An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling

    NASA Astrophysics Data System (ADS)

    Qiu, X. N.; Lau, H. Y. K.

    The problem of job shop scheduling in a dynamic environment where random perturbation exists in the system is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP) where unexpected events occurred randomly. This algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate this algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the rescheduling times under the defined objective and to keep the scheduling process stable and efficient.

  7. Full Duplex, Spread Spectrum Radio System

    NASA Technical Reports Server (NTRS)

    Harvey, Bruce A.

    2000-01-01

    The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
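
    The packet timing quoted above can be checked with a line of arithmetic, shown below; the 4 - 6 ms figure corresponds to two to three packet durations.

```python
# Quick check of the packet timing quoted above: 16 voice samples at 8,000 samples/s.
samples_per_packet = 16
sample_rate_hz = 8_000
packet_ms = 1_000 * samples_per_packet / sample_rate_hz
print(packet_ms)                      # 2.0 ms of voice per packet
print(2 * packet_ms, 3 * packet_ms)   # 4.0 - 6.0 ms feedback delay (two to three packet times)
```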

  8. Charge scheduling of an energy storage system under time-of-use pricing and a demand charge.

    PubMed

    Yoon, Yourim; Kim, Yong-Hyuk

    2014-01-01

    A real-coded genetic algorithm is used to schedule the charging of an energy storage system (ESS), operated in tandem with renewable power by an electricity consumer who is subject to time-of-use pricing and a demand charge. Simulations based on load and generation profiles of typical residential customers show that an ESS scheduled by our algorithm can reduce electricity costs by approximately 17%, compared to a system without an ESS and by 8% compared to a scheduling algorithm based on net power.
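    As a minimal illustration of the cost structure this abstract targets, the sketch below evaluates a candidate ESS charge/discharge schedule against time-of-use energy prices plus a demand charge on the peak grid draw, the kind of fitness function a real-coded genetic algorithm would call. The prices, demand-charge rate, battery capacity and efficiency are hypothetical values, not taken from the paper.

```python
# Illustrative cost model for evaluating an ESS charge/discharge schedule
# under time-of-use (TOU) pricing and a demand charge. All numeric values
# (prices, demand rate, battery limits) are hypothetical.

def electricity_cost(load, generation, ess_power, tou_price, demand_rate,
                     capacity=10.0, efficiency=0.95, dt=1.0):
    """ess_power[t] > 0 means charging, < 0 means discharging (kW)."""
    soc = 0.0          # state of charge (kWh)
    energy_cost = 0.0
    peak_kw = 0.0
    for t in range(len(load)):
        p = ess_power[t]
        # update state of charge, respecting capacity limits
        soc += (p * efficiency if p > 0 else p / efficiency) * dt
        if not 0.0 <= soc <= capacity:
            return float("inf")          # infeasible schedule
        # net power drawn from the grid after renewables and the ESS
        grid_kw = max(load[t] - generation[t] + p, 0.0)
        energy_cost += grid_kw * dt * tou_price[t]
        peak_kw = max(peak_kw, grid_kw)
    return energy_cost + demand_rate * peak_kw

# Example: 4 one-hour slots with off-peak/on-peak prices (made-up numbers)
cost = electricity_cost(load=[2.0, 3.0, 5.0, 4.0],
                        generation=[0.0, 1.0, 2.0, 0.5],
                        ess_power=[1.5, 0.5, -1.0, -0.8],
                        tou_price=[0.08, 0.08, 0.25, 0.25],
                        demand_rate=10.0)
print(round(cost, 2))
```

    A genetic algorithm would evolve the ess_power vector while a function of this kind scores each candidate schedule.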

  9. Charge Scheduling of an Energy Storage System under Time-of-Use Pricing and a Demand Charge

    PubMed Central

    Yoon, Yourim

    2014-01-01

    A real-coded genetic algorithm is used to schedule the charging of an energy storage system (ESS), operated in tandem with renewable power by an electricity consumer who is subject to time-of-use pricing and a demand charge. Simulations based on load and generation profiles of typical residential customers show that an ESS scheduled by our algorithm can reduce electricity costs by approximately 17%, compared to a system without an ESS and by 8% compared to a scheduling algorithm based on net power. PMID:25197720

  10. Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network

    NASA Astrophysics Data System (ADS)

    Oda, Akihiro; Nishi, Hiroaki

Recently, the importance of data sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchanging methods for data sharing. To reduce the traffic of broadcasted messages, reducing the amount of redundant information is indispensable. In order to reduce packet loss in mobile ad-hoc networks, QoS-sensitive routing algorithms have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchanging algorithm based on a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcasting messages, and it can guarantee QoS in a wireless network environment. It can be applied to routing in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and neighborhood information is periodically broadcasted based on this table. The proposed hash-based management of routing entries, using an extended MAC address, eliminates the overhead of message flooding. An analysis of hash-value collisions determines the minimally required length of the hash values. Based on this mathematical analysis, an optimum hash function and hash-value length can be chosen. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing setting.

  11. Bandwidth characteristics of multimedia data traffic on a local area network

    NASA Technical Reports Server (NTRS)

    Chuang, Shery L.; Doubek, Sharon; Haines, Richard F.

    1993-01-01

    Limited spacecraft communication links call for users to investigate the potential use of video compression and multimedia technologies to optimize bandwidth allocations. The objective was to determine the transmission characteristics of multimedia data - motion video, text or bitmap graphics, and files transmitted independently and simultaneously over an ethernet local area network. Commercial desktop video teleconferencing hardware and software and Intel's proprietary Digital Video Interactive (DVI) video compression algorithm were used, and typical task scenarios were selected. The transmission time, packet size, number of packets, and network utilization of the data were recorded. Each data type - compressed motion video, text and/or bitmapped graphics, and a compressed image file - was first transmitted independently and its characteristics recorded. The results showed that an average bandwidth of 7.4 kilobits per second (kbps) was used to transmit graphics; an average bandwidth of 86.8 kbps was used to transmit an 18.9-kilobyte (kB) image file; a bandwidth of 728.9 kbps was used to transmit compressed motion video at 15 frames per second (fps); and a bandwidth of 75.9 kbps was used to transmit compressed motion video at 1.5 fps. Average packet sizes were 933 bytes for graphics, 498.5 bytes for the image file, 345.8 bytes for motion video at 15 fps, and 341.9 bytes for motion video at 1.5 fps. Simultaneous transmission of multimedia data types was also characterized. The multimedia packets used transmission bandwidths of 341.4 kbps and 105.8kbps. Bandwidth utilization varied according to the frame rate (frames per second) setting for the transmission of motion video. Packet size did not vary significantly between the data types. When these characteristics are applied to Space Station Freedom (SSF), the packet sizes fall within the maximum specified by the Consultative Committee for Space Data Systems (CCSDS). The uplink of imagery to SSF may be performed at minimal frame rates and/or within seconds of delay, depending on the user's allocated bandwidth. Further research to identify the acceptable delay interval and its impact on human performance is required. Additional studies in network performance using various video compression algorithms and integrated multimedia techniques are needed to determine the optimal design approach for utilizing SSF's data communications system.

  12. Algorithm of composing the schedule of construction and installation works

    NASA Astrophysics Data System (ADS)

    Nehaj, Rustam; Molotkov, Georgij; Rudchenko, Ivan; Grinev, Anatolij; Sekisov, Aleksandr

    2017-10-01

An algorithm for scheduling works is developed in which the priority of a work item corresponds to the total weight of its subordinate works (the vertices of the graph), and it is proved that for tree-type graphs the algorithm is optimal. A further algorithm is synthesized to reduce the search for solutions when drawing up schedules of construction and installation works, by isolating a minimum-size subset containing the optimal solution, which is determined by the structure of the initial data and its numerical values. An algorithm for scheduling construction and installation work is then developed that takes into account the movement schedule of work brigades; using the branch-and-bound method, it can efficiently minimize the work completion time with respect to the parameters of organizational and technological reliability. The computational algorithm was implemented in MATLAB 2008. The initial data matrices were filled with random numbers uniformly distributed between 1 and 100, and the resulting problems took 0.5, 2.5, 7.5 and 27 minutes to solve. Thus, the proposed method for estimating the lower bound of the solution is sufficiently accurate and allows efficient solution of the minimax problem of scheduling construction and installation works.

  13. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
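    For readers unfamiliar with the selection mechanism being modified, the sketch below shows plain (non-fuzzy) roulette-wheel selection, where an individual's chance of being picked is proportional to its fitness; the paper's fuzzy membership weighting and tabu-list mutation are not reproduced here.

```python
# Standard roulette-wheel selection: the probability of selecting an
# individual is proportional to its (non-negative) fitness value.
import random

def roulette_wheel_select(population, fitness):
    total = sum(fitness)
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitness):
        cumulative += f
        if cumulative >= r:
            return individual
    return population[-1]   # guard against floating-point rounding

# Example: selection frequencies roughly track the fitness proportions
pop = ["sched_A", "sched_B", "sched_C"]
fit = [0.5, 0.3, 0.2]
picks = [roulette_wheel_select(pop, fit) for _ in range(1000)]
print({p: picks.count(p) for p in pop})
```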

  14. Car painting process scheduling with harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

Automotive painting programs paint the car body using robots, improving the efficiency of the production system. The production system becomes more efficient still if attention is paid to scheduling the order of the cars, which is done by considering the body shape of each car to be painted. Flow shop scheduling is a scheduling model in which all jobs to be processed flow along the same route through the machines. Scheduling problems arise when there are n jobs to be processed on the machines and it must be specified which job is processed first and how jobs are allocated to the machines to obtain an orderly production process. The Harmony Search Algorithm is a metaheuristic optimization algorithm inspired by music: it mimics the way musicians search for a perfect harmony, with musical harmony playing the role of the optimum in the optimization process. Based on the tests performed, the optimal car sequence with the minimum makespan value was obtained.
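    The objective being minimized here, the makespan of a permutation flow shop, can be evaluated with the standard completion-time recurrence sketched below; a harmony search (or any other metaheuristic) would repeatedly score candidate car orders with such a routine. The processing times in the example are made up for illustration.

```python
# Makespan of a permutation flow-shop schedule: every job visits the machines
# in the same order, and completion[i][j] is the completion time of the j-th
# job in the sequence on machine i.

def flowshop_makespan(sequence, proc_time):
    """proc_time[machine][job] in time units; sequence is a job order."""
    m = len(proc_time)
    completion = [[0.0] * len(sequence) for _ in range(m)]
    for j, job in enumerate(sequence):
        for i in range(m):
            ready_machine = completion[i][j - 1] if j > 0 else 0.0
            ready_job = completion[i - 1][j] if i > 0 else 0.0
            completion[i][j] = max(ready_machine, ready_job) + proc_time[i][job]
    return completion[-1][-1]

# 3 machines x 4 jobs (e.g. car bodies queued for the paint shop)
p = [[4, 3, 6, 2],
     [5, 2, 4, 3],
     [3, 4, 2, 5]]
print(flowshop_makespan([0, 1, 2, 3], p))   # makespan of this order
print(flowshop_makespan([3, 1, 0, 2], p))   # another candidate order
```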

  15. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.

  16. A dynamic routing strategy with limited buffer on scale-free network

    NASA Astrophysics Data System (ADS)

    Wang, Yufei; Liu, Feng

    2016-04-01

In this paper, we propose an integrated routing strategy based on global static topology information and local dynamic data packet queue lengths to improve the transmission efficiency of scale-free networks. The proposed routing strategy combines a global static routing strategy (based on the shortest path algorithm) with local dynamic queue length management: instead of using an infinite buffer, the queue length of each node i is limited by a critical queue length Q_i^c. When network traffic is low and the queue length of each node i is shorter than its critical queue length Q_i^c, the node forwards packets according to the global routing table. With increasing network traffic, when the buffers of the higher-degree nodes are full, they cannot receive packets due to their limited buffers and the packets have to be delivered to lower-degree nodes. The global static routing strategy shortens the time a packet takes to reach its destination, and the local queue length limit balances the network traffic. The optimal critical queue lengths of the nodes are analysed. Simulation results show that the proposed routing strategy achieves better performance than the global static strategy based on topology alone, and almost the same performance as the global dynamic routing strategy with less complexity.
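    A minimal sketch of the forwarding rule described above, assuming a precomputed shortest-path routing table and per-node queue counters: the static next hop is used while its queue is below the critical length, and otherwise the packet falls back to a lower-degree neighbour with spare buffer (the fallback tie-break is an assumption made for illustration).

```python
# Hybrid forwarding rule: follow the static shortest-path next hop while its
# queue is below the critical length Q_i^c, otherwise fall back to a
# lower-degree neighbour with spare buffer.

def choose_next_hop(node, packet_dst, routing_table, queue_len, critical_len,
                    neighbors, degree):
    preferred = routing_table[node][packet_dst]          # global static route
    if queue_len[preferred] < critical_len[preferred]:
        return preferred
    # preferred hop is saturated: try neighbours in increasing degree order
    for nbr in sorted(neighbors[node], key=lambda n: degree[n]):
        if queue_len[nbr] < critical_len[nbr]:
            return nbr
    return None   # all neighbours full; hold the packet this step

# Toy example: node 0 routes to destination 3 via hub 1 unless hub 1 is full
routing_table = {0: {3: 1}}
neighbors = {0: [1, 2]}
degree = {1: 5, 2: 2}
print(choose_next_hop(0, 3, routing_table,
                      queue_len={1: 10, 2: 1},
                      critical_len={1: 10, 2: 4},
                      neighbors=neighbors, degree=degree))   # -> 2
```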

  17. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    PubMed

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, a static and predictable task model that can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling for the pinwheel task model. In this paper, however, we show that dynamic-priority power-aware scheduling results can also be applied to the pinwheel task model. This method saves more energy than the previous static priority scheduling methods and, because the system remains static, it is tractable and applicable to small embedded or ubiquitous computing devices. We also introduce a novel power-aware scheduling algorithm that exploits all slack under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity O(n), reduces energy consumption by 10-80% compared with existing algorithms.
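    The two ingredients named in the abstract, earliest-deadline-first selection and DVS, can be illustrated with the generic sketch below; it uses a simple utilization-based frequency rule and does not reproduce the paper's O(n) slack-exploiting algorithm.

```python
# Minimal sketch: earliest-deadline-first task selection plus a simple
# utilization-based frequency scaling rule of the kind used in DVS schemes.
# This is a generic illustration, not the paper's slack-reclaiming method.

def edf_pick(ready_tasks):
    """ready_tasks: list of dicts with 'name', 'deadline', 'remaining_wcet'."""
    return min(ready_tasks, key=lambda t: t["deadline"]) if ready_tasks else None

def dvs_frequency(tasks, f_levels):
    """Choose the lowest normalized frequency that keeps EDF utilization <= 1."""
    utilization = sum(t["wcet"] / t["period"] for t in tasks)
    for f in sorted(f_levels):
        if utilization <= f:
            return f
    return max(f_levels)

task_set = [{"wcet": 1.0, "period": 5.0}, {"wcet": 2.0, "period": 10.0}]
print(dvs_frequency(task_set, f_levels=[0.25, 0.5, 0.75, 1.0]))   # -> 0.5

ready = [{"name": "T1", "deadline": 8.0, "remaining_wcet": 1.0},
         {"name": "T2", "deadline": 5.0, "remaining_wcet": 2.0}]
print(edf_pick(ready)["name"])                                     # -> T2
```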

  18. Applying Dynamic Priority Scheduling Scheme to Static Systems of Pinwheel Task Model in Power-Aware Scheduling

    PubMed Central

    2014-01-01

Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, a static and predictable task model that can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling for the pinwheel task model. In this paper, however, we show that dynamic-priority power-aware scheduling results can also be applied to the pinwheel task model. This method saves more energy than the previous static priority scheduling methods and, because the system remains static, it is tractable and applicable to small embedded or ubiquitous computing devices. We also introduce a novel power-aware scheduling algorithm that exploits all slack under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity O(n), reduces energy consumption by 10–80% compared with existing algorithms. PMID:25121126

  19. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs in a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.

  20. Integrated DoD Voice and Data Networks and Ground Packet Radio Technology

    DTIC Science & Technology

    1976-08-01

as the traffic requirement level increases. Moreover, the satellite switch selection problem is only meaningful over a limited traffic range. [Remaining indexed fragments: a table of CPU times vs. number of switches for the satellite switch selection algorithm, run on a PDP-10 (times given in minutes and seconds), and a citation of "Saturation Algorithm for Topological Design of Packet-Switched Communications Networks," National Telecommunications Conference Proceedings.]

  1. Advanced teleprocessing systems

    NASA Astrophysics Data System (ADS)

    Kleinrock, L.; Gerla, M.

    1982-09-01

This Annual Technical Report covers research conducted during the period from October 1, 1981 to September 30, 1982. This contract has three primary designated research areas: packet radio systems, resource sharing and allocation, and distributed processing and control. This report contains abstracts of publications which summarize research results in these areas, followed by the main body of the report, which is devoted to a study of channel access protocols that are executed by the nodes of a network to schedule their transmissions on a multi-access broadcast channel. In particular, the main body consists of a Ph.D. dissertation, Channel Access Protocols for Multi-Hop Broadcast Packet Radio Networks. This work discusses some new channel access protocols useful for mobile radio networks. Included is an analysis of slotted ALOHA and some tight bounds on the performance of all possible protocols in a mobile environment.
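    The slotted ALOHA analysis mentioned above rests on the classical throughput relation S = G·e^(-G) for Poisson offered load G, which peaks at 1/e when G = 1; the snippet below simply evaluates it.

```python
# Classical slotted-ALOHA throughput model: with aggregate offered load G
# (frames per slot, Poisson arrivals), the expected throughput is
# S = G * exp(-G), peaking at 1/e when G = 1.
import math

def slotted_aloha_throughput(G):
    return G * math.exp(-G)

for G in (0.5, 1.0, 1.5):
    print(f"G={G:.1f}  S={slotted_aloha_throughput(G):.3f}")
print("max S =", round(slotted_aloha_throughput(1.0), 3))   # 1/e ~ 0.368
```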

  2. Estimating mental fatigue based on electroencephalogram and heart rate variability

    NASA Astrophysics Data System (ADS)

    Zhang, Chong; Yu, Xiaolin

    2010-01-01

The effects of a long-term mental arithmetic task on psychology are investigated by subjective self-reporting measures and an action performance test. Based on the electroencephalogram (EEG) and heart rate variability (HRV), the impacts of prolonged cognitive activity on the central nervous system and autonomic nervous system are observed and analyzed. Wavelet packet parameters of the EEG and power spectral indices of HRV are combined to estimate the change in mental fatigue. Wavelet packet parameters of the EEG that change significantly are then extracted as features of brain activity in different mental fatigue states, and a support vector machine (SVM) algorithm is applied to differentiate the two mental fatigue states. The experimental results show that the long-term mental arithmetic task induces mental fatigue, and that the wavelet packet parameters of the EEG and the power spectral indices of HRV are strongly correlated with it. The predominant activity of the subjects' autonomic nervous system shifts from parasympathetic to sympathetic activity after the task. Moreover, the slow waves of the EEG increase, while the fast waves of the EEG and the degree of disorder of the brain decrease compared with the pre-task state. The SVM algorithm can effectively differentiate the two mental fatigue states, achieving a maximum classification accuracy of 91%, and could be a promising tool for the evaluation of mental fatigue. Fatigue, especially mental fatigue, is a common phenomenon in modern life and a persistent occupational hazard for professionals. Mental fatigue is usually accompanied by a sense of weariness, reduced alertness, and reduced mental performance, which can lead to accidents, decrease workplace productivity, and harm health. Therefore, the evaluation of mental fatigue is important for occupational risk protection, productivity, and occupational health.

  3. Integrated scheduling of a container handling system with simultaneous loading and discharging operations

    NASA Astrophysics Data System (ADS)

    Li, Chen; Lu, Zhiqiang; Han, Xiaole; Zhang, Yuejun; Wang, Li

    2016-03-01

    The integrated scheduling of container handling systems aims to optimize the coordination and overall utilization of all handling equipment, so as to minimize the makespan of a given set of container tasks. A modified disjunctive graph is proposed and a mixed 0-1 programming model is formulated. A heuristic algorithm is presented, in which the original problem is divided into two subproblems. In the first subproblem, contiguous bay crane operations are applied to obtain a good quay crane schedule. In the second subproblem, proper internal truck and yard crane schedules are generated to match the given quay crane schedule. Furthermore, a genetic algorithm based on the heuristic algorithm is developed to search for better solutions. The computational results show that the proposed algorithm can efficiently find high-quality solutions. They also indicate the effectiveness of simultaneous loading and discharging operations compared with separate ones.

  4. A controlled genetic algorithm by fuzzy logic and belief functions for job-shop scheduling.

    PubMed

    Hajri, S; Liouane, N; Hammadi, S; Borne, P

    2000-01-01

    Most scheduling problems are highly complex combinatorial problems. However, stochastic methods such as genetic algorithm yield good solutions. In this paper, we present a controlled genetic algorithm (CGA) based on fuzzy logic and belief functions to solve job-shop scheduling problems. For better performance, we propose an efficient representational scheme, heuristic rules for creating the initial population, and a new methodology for mixing and computing genetic operator probabilities.

  5. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  6. Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.

  7. Direct trust-based security scheme for RREQ flooding attack in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Kumar, Sunil; Dutta, Kamlesh

    2017-06-01

The routing algorithms in MANETs exhibit distributed and cooperative behaviour, which makes them an easy target for denial of service (DoS) attacks. The RREQ flooding attack is a flooding-type DoS attack against the Ad hoc On Demand Distance Vector (AODV) routing protocol, in which the attacker broadcasts a massive amount of bogus Route Request (RREQ) packets to set up routes to non-existent or existent destinations in the network. This paper presents a direct trust-based security scheme to detect and mitigate the impact of the RREQ flooding attack on the network, in which every node evaluates the trust degree value of its neighbours by analysing the frequency of RREQ packets originated by them over a short period of time. Taking the node's trust degree value as input, the proposed scheme is smoothly extended to suppress surplus and bogus RREQ flooding packets at one-hop neighbours during the route discovery process. This scheme distinguishes itself from existing techniques by not directly blocking the service of a normal node that emits an increased amount of RREQ packets under some unusual conditions. The results obtained from the simulation experiments clearly show the feasibility and effectiveness of the proposed defensive scheme.
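    A minimal sketch of the detection idea, assuming each node keeps per-neighbour timestamps of recently received RREQs: the trust degree falls as the RREQ rate in a short window grows, and RREQs from neighbours below a trust threshold are suppressed. The window length, rate limit and trust mapping are illustrative assumptions, not the paper's parameters.

```python
# Per-neighbour RREQ-rate monitoring with a simple trust mapping. Thresholds
# and the linear trust function are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_S = 1.0          # observation window (seconds)
RREQ_LIMIT = 10         # tolerated RREQs per window
TRUST_THRESHOLD = 0.5   # below this the neighbour's RREQs are dropped

rreq_times = defaultdict(deque)   # neighbour id -> timestamps of recent RREQs

def trust_degree(neighbor, now=None):
    now = now if now is not None else time.time()
    q = rreq_times[neighbor]
    while q and now - q[0] > WINDOW_S:     # discard RREQs outside the window
        q.popleft()
    rate = len(q)
    return max(0.0, 1.0 - rate / (2.0 * RREQ_LIMIT))   # 1.0 = fully trusted

def accept_rreq(neighbor, now=None):
    now = now if now is not None else time.time()
    rreq_times[neighbor].append(now)
    return trust_degree(neighbor, now) >= TRUST_THRESHOLD

# A neighbour flooding 15 RREQs within one window drops below the threshold
results = [accept_rreq("node_7", now=0.01 * i) for i in range(15)]
print(results.count(True), "accepted,", results.count(False), "suppressed")
```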

  8. Collaborative Distributed Scheduling Approaches for Wireless Sensor Network

    PubMed Central

    Niu, Jianjun; Deng, Zhidong

    2009-01-01

Energy constraints restrict the lifetime of wireless sensor networks (WSNs) with battery-powered nodes, which poses great challenges for their large-scale application. In this paper, we propose a family of collaborative distributed scheduling approaches (CDSAs) based on the Markov process to reduce the energy consumption of a WSN. The family of CDSAs comprises two approaches: a one-step collaborative distributed approach and a two-step collaborative distributed approach. The approaches enable nodes to learn the behavior information of their environment collaboratively and integrate sleep scheduling with transmission scheduling to reduce the energy consumption. We analyze the adaptability and practicality features of the CDSAs. The simulation results show that the two proposed approaches can effectively reduce nodes' energy consumption. Other characteristics of the CDSAs, such as buffer occupancy and packet delay, are also analyzed in this paper. We evaluate the CDSAs extensively on a 15-node WSN testbed. The test results show that the CDSAs conserve energy effectively and are feasible for real WSNs. PMID:22408491

  9. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  10. A scheduling algorithm for Spacelab telescope observations

    NASA Technical Reports Server (NTRS)

    Grone, B.

    1982-01-01

    An algorithm is developed for sequencing and scheduling of observations of stellar targets by equipment on Spacelab. The method is a general one. The scheduling problem is defined and examined. The method developed for its solution is documented. Suggestions for further development and implementation of this method are made.

  11. XML Tactical Chat (XTC): The Way Ahead for Navy Chat

    DTIC Science & Technology

    2007-09-01

multicast transmissions via sophisticated pruning algorithms, while allowing multicast packets to "tunnel" through IP routers [Macedonia, Brutzman 1994]. ... conference was Jabber Inc., who added some great insight into the power of Jabber, with great features including BlackBerry handheld connectivity and ...

  12. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.

    PubMed

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-23

Decentralized clustering in modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent the entire system from breaking down because of a single point of failure. Recently, toolkits such as Akka have been commonly used to easily build this kind of cluster. However, clusters of this kind, which use Gossip as their membership management protocol and rely on a link-failure detection mechanism, cannot handle the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by data from the application layer, to solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well.

  13. A network monitor for HTTPS protocol based on proxy

    NASA Astrophysics Data System (ADS)

    Liu, Yangxin; Zhang, Lingcui; Zhou, Shuguang; Li, Fenghua

    2016-10-01

With the explosive growth of harmful Internet information such as pornography, violence, and hate messages, network monitoring is essential. Traditional network monitors are based mainly on bypass monitoring. However, network traffic cannot be filtered using bypass monitoring. Meanwhile, only a few studies focus on network monitoring of the HTTPS protocol, because HTTPS data travels in encrypted traffic, which makes it difficult to monitor. This paper proposes a network monitor for the HTTPS protocol based on a proxy. We adopt OpenSSL to establish TLS secure tunnels between clients and servers. Epoll is used to handle a large number of concurrent client connections. We also adopt the Knuth-Morris-Pratt string searching algorithm (or KMP algorithm) to speed up the search process. Besides, we modify request packets to reduce the risk of errors and modify response packets to improve security. Experiments show that our proxy can monitor the content of all tested HTTPS websites efficiently with little loss of network performance.
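    The Knuth-Morris-Pratt matcher the proxy relies on is a standard algorithm; the sketch below shows it scanning a decrypted request line for a keyword without re-examining characters that were already matched.

```python
# Knuth-Morris-Pratt (KMP) string search: linear-time matching using a
# precomputed failure table for the pattern.

def kmp_failure(pattern):
    """Longest proper prefix that is also a suffix, for each prefix length."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    if not pattern:
        return 0
    fail = kmp_failure(pattern)
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("GET /index.html HTTP/1.1", "index.html"))   # -> 5
```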

  14. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  15. A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  16. A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  17. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms that approach optimality, and these have shown good performance for task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain, via a sorted list of completion times, the average completion time of each task. Then, the maximum of these averages is identified, and the task with the maximum average is allocated to the machine with the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
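    A sketch of the Sort-Mid loop as the abstract describes it: compute each unscheduled task's completion times over the machines, take the task whose average is largest, and assign it to the machine giving its minimum completion time. How machine ready times are maintained is an assumption made for illustration.

```python
# Sketch of the Sort-Mid procedure as described in the abstract. The
# ready-time bookkeeping is an illustrative assumption.

def sort_mid(exec_time):
    """exec_time[task][machine] = execution time of task on that machine."""
    n_tasks = len(exec_time)
    n_machines = len(exec_time[0])
    ready = [0.0] * n_machines                 # machine ready times
    assignment = {}
    unscheduled = set(range(n_tasks))
    while unscheduled:
        # completion time of each unscheduled task on each machine
        completion = {t: sorted(ready[m] + exec_time[t][m]
                                for m in range(n_machines))
                      for t in unscheduled}
        # task with the maximum average completion time goes next
        task = max(unscheduled,
                   key=lambda t: sum(completion[t]) / n_machines)
        machine = min(range(n_machines),
                      key=lambda m: ready[m] + exec_time[task][m])
        assignment[task] = machine
        ready[machine] += exec_time[task][machine]
        unscheduled.remove(task)
    return assignment, max(ready)              # schedule and its makespan

times = [[14, 16, 9],    # task 0 on machines 0..2
         [13, 19, 18],
         [11, 13, 19],
         [13,  8, 17]]
mapping, makespan = sort_mid(times)
print(mapping, "makespan =", makespan)
```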

  18. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms that approach optimality, and these have shown good performance for task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain, via a sorted list of completion times, the average completion time of each task. Then, the maximum of these averages is identified, and the task with the maximum average is allocated to the machine with the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan. PMID:26644937

  19. Interactivity vs. fairness in networked linux systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wenji; Crawford, Matt; /Fermilab

In general, the Linux 2.6 scheduler can ensure fairness and provide excellent interactive performance at the same time. However, our experiments and mathematical analysis have shown that the current Linux interactivity mechanism tends to incorrectly categorize non-interactive network applications as interactive, which can lead to serious fairness or starvation issues. In the extreme, a single process can unjustifiably obtain up to 95% of the CPU! The root cause is due to the facts that: (1) network packets arrive at the receiver independently and discretely, and the 'relatively fast' non-interactive network process might frequently sleep to wait for packet arrival. Though each sleep lasts for a very short period of time, the wait-for-packet sleeps occur so frequently that they lead to interactive status for the process. (2) The current Linux interactivity mechanism provides the possibility that a non-interactive network process could receive a high CPU share, and at the same time be incorrectly categorized as 'interactive.' In this paper, we propose and test a possible solution to address the interactivity vs. fairness problems. Experiment results have proved the effectiveness of the proposed solution.

  20. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
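    As one concrete example of the heuristics compared, the sketch below implements Min-min: repeatedly pick the unscheduled task whose best completion time over all machines is smallest and assign it to that machine. The execution-time matrix is illustrative.

```python
# Min-min heuristic for mapping independent tasks to machines: at each step,
# assign the task whose minimum completion time is smallest overall.

def min_min(exec_time):
    """exec_time[task][machine]; returns task->machine map and makespan."""
    n_machines = len(exec_time[0])
    ready = [0.0] * n_machines
    unscheduled = set(range(len(exec_time)))
    assignment = {}
    while unscheduled:
        best = None                      # (completion_time, task, machine)
        for t in unscheduled:
            for m in range(n_machines):
                c = ready[m] + exec_time[t][m]
                if best is None or c < best[0]:
                    best = (c, t, m)
        _, task, machine = best
        assignment[task] = machine
        ready[machine] += exec_time[task][machine]
        unscheduled.remove(task)
    return assignment, max(ready)

tasks = [[6, 12], [5, 10], [8, 4]]       # 3 tasks x 2 machines (illustrative)
print(min_min(tasks))
```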

  1. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  2. Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling

    NASA Technical Reports Server (NTRS)

    Brown, Matthew; Johnston, Mark D.

    2013-01-01

    Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.

  3. Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Craig, Sam; While, Lyndon; Barone, Luigi

    We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.

  4. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

Cloud computing is an on-demand computing model that uses virtualization technology to provide cloud resources to users in the form of virtual machines over the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private cloud. Since resources are limited in these private clouds, maximizing resource utilization and providing guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, such as the V-MCT and priority scheduling algorithms. PMID:26473166

  5. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into the genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and polynomial complexity of checking and amending procedures together support the cooperative aspect of genetic search for scheduling problems strongly.

  6. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

Cloud computing is an on-demand computing model that uses virtualization technology to provide cloud resources to users in the form of virtual machines over the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private cloud. Since resources are limited in these private clouds, maximizing resource utilization and providing guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, such as the V-MCT and priority scheduling algorithms.

  7. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

Efficient task scheduling is critical for achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is chosen as the target of the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.

  8. Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning

    NASA Astrophysics Data System (ADS)

    Schumacher, André; Haanpää, Harri

    We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with ns2 simulations.

  9. Analysis of sequencing and scheduling methods for arrival traffic

    NASA Technical Reports Server (NTRS)

    Neuman, Frank; Erzberger, Heinz

    1990-01-01

    The air traffic control subsystem that performs scheduling is discussed. The function of the scheduling algorithms is to plan automatically the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several important scheduling algorithms are described and the statistical performance of the scheduling algorithms is examined. Scheduling brings order to an arrival sequence for aircraft. First-come-first-served scheduling (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the traffic, gaps will remain in the scheduled sequence of aircraft. These gaps are filled, or partially filled, by time-advancing the leading aircraft after a gap while still preserving the FCFS order. Tightly scheduled groups of aircraft remain with a mix of heavy and large aircraft. Separation requirements differ for different types of aircraft trailing each other. Advantage is taken of this fact through mild reordering of the traffic, thus shortening the groups and reducing average delays. Actual delays for different samples with the same statistical parameters vary widely, especially for heavy traffic.
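    The FCFS step described above can be sketched as follows: order aircraft by estimated time of arrival and assign each a landing time no earlier than its ETA and no closer to its predecessor than the required leader/follower separation. The separation values used here are placeholders, not actual wake-vortex standards.

```python
# First-come-first-served landing-time assignment with pairwise separation
# requirements. Separation values are illustrative placeholders.

SEPARATION_S = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
                ("large", "heavy"): 60, ("large", "large"): 70}

def fcfs_schedule(arrivals):
    """arrivals: list of (flight_id, eta_seconds, weight_class)."""
    ordered = sorted(arrivals, key=lambda a: a[1])     # FCFS by ETA
    schedule = []
    prev_time, prev_class = None, None
    for flight, eta, wclass in ordered:
        if prev_time is None:
            slot = eta
        else:
            slot = max(eta, prev_time + SEPARATION_S[(prev_class, wclass)])
        schedule.append((flight, slot, slot - eta))    # (id, STA, delay)
        prev_time, prev_class = slot, wclass
    return schedule

demo = [("AC1", 0, "heavy"), ("AC2", 30, "large"), ("AC3", 100, "large")]
for flight, sta, delay in fcfs_schedule(demo):
    print(flight, "lands at", sta, "s, delay", delay, "s")
```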

  10. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid local search factors are implemented. We developed algorithms that adapt the results of the local search back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.
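    The random-key encoding mentioned in the abstract can be illustrated in its standard form: a chromosome is a vector of reals in [0, 1), and sorting the keys yields an operation sequence. The paper's variable-subjob splitting and local search adaptation are not reproduced; this only shows the decoding step.

```python
# Standard random-key encoding and decoding: sorting the keys gives the
# processing order implied by a chromosome.
import random

def random_chromosome(n_ops, rng=random.Random(42)):
    return [rng.random() for _ in range(n_ops)]

def decode(chromosome):
    """Return the operation order implied by sorting the random keys."""
    return sorted(range(len(chromosome)), key=lambda i: chromosome[i])

keys = random_chromosome(5)
print([round(k, 2) for k in keys])
print("sequence:", decode(keys))
```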

  11. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    NASA Astrophysics Data System (ADS)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

This article addresses the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools and the transfer times of jobs and tools between machines are considered, to generate optimal sequences that minimize the makespan in a multi-machine Flexible Manufacturing System (FMS). The performance of an FMS is expected to improve through effective utilization of its resources and through proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent tool that has proven itself as a good alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is first tested on 22 job sets, with makespan as the objective, for the scheduling of machines and tools where machines share tools but transfer times of jobs and tools are not considered, and the results are compared with those of existing methods. The results show that SOS outperforms them. The same SOS algorithm is then used for the simultaneous scheduling of machines, AGVs and tools where machines share tools and transfer times of jobs and tools are considered, to determine the best sequences that minimize the makespan.

  12. A new clustering strategy

    NASA Astrophysics Data System (ADS)

    Feng, Jian-xin; Tang, Jia-fu; Wang, Guang-xing

    2007-04-01

On the basis of an analysis of clustering algorithms that have been proposed for MANETs, a novel clustering strategy is proposed in this paper. With trust defined by statistical hypothesis testing in probability theory and the cluster head selected by node trust and node mobility, this strategy can detect malicious nodes, a function neglected by other clustering algorithms, and overcomes a deficiency of the MOBIC algorithm, which cannot compute the relative mobility metric of the corresponding nodes when the receiving power of two consecutive HELLO packets cannot be measured. It is an effective solution for clustering MANETs securely.

  13. Evaluation of Swift Start TCP in Long-Delay Environment

    NASA Technical Reports Server (NTRS)

    Lawas-Grodek, Frances J.; Tran, Diepchi T.

    2004-01-01

    This report presents the test results of the Swift Start algorithm in single-flow and multiple-flow testbeds under the effects of high propagation delays, various slow bottlenecks, and small queue sizes. Although this algorithm estimates capacity and implements packet pacing, the findings were that in a heavily congested link, the Swift Start algorithm will not be applicable. The reason is that the bottleneck estimation is falsely influenced by timeouts induced by retransmissions and the expiration of delayed acknowledgment (ACK) timers, thus causing the modified Swift Start code to fall back to regular transmission control protocol (TCP).

  14. Anchorage Arrival Scheduling Under Off-Nominal Weather Conditions

    NASA Technical Reports Server (NTRS)

    Grabbe, Shon; Chan, William N.; Mukherjee, Avijit

    2012-01-01

    Weather can cause flight diversions, passenger delays, additional fuel consumption and schedule disruptions at any high volume airport. The impacts are particularly acute at the Ted Stevens Anchorage International Airport in Anchorage, Alaska due to its importance as a major international portal. To minimize the impacts due to weather, a multi-stage scheduling process is employed that is iteratively executed, as updated aircraft demand and/or airport capacity data become available. The strategic scheduling algorithm assigns speed adjustments for flights that originate outside of Anchorage Center to achieve the proper demand and capacity balance. Similarly, an internal departure-scheduling algorithm assigns ground holds for pre-departure flights that originate from within Anchorage Center. Tactical flight controls in the form of airborne holding are employed to reactively account for system uncertainties. Real-world scenarios that were derived from the January 16, 2012 Anchorage visibility observations and the January 12, 2012 Anchorage arrival schedule were used to test the initial implementation of the scheduling algorithm in fast-time simulation experiments. Although over 90% of the flights in the scenarios arrived at Anchorage without requiring any delay, pre-departure scheduling was the dominant form of control for Anchorage arrivals. Additionally, tactical scheduling was used extensively in conjunction with the pre-departure scheduling to reactively compensate for uncertainties in the arrival demand. For long-haul flights, the strategic scheduling algorithm performed best when the scheduling horizon was greater than 1,000 nmi. With these long scheduling horizons, it was possible to absorb between ten and 12 minutes of delay through speed control alone. Unfortunately, the use of tactical scheduling, which resulted in airborne holding, was found to increase as the strategic scheduling horizon increased because of the additional uncertainty in the arrival times of the aircraft. Findings from these initial experiments indicate that it is possible to schedule arrivals into Anchorage with minimal delays under low-visibility conditions with less disruption to high-cost, international flights.

  15. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    PubMed

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

    Group scheduling is significant for efficient and cost-effective production systems. However, setup times exist between groups, and they need to be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of minimizing the makespan and the total weighted tardiness simultaneously. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, this research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating some steps of a genetic algorithm is proposed to obtain Pareto solutions for this problem. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.
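    Since HPABC returns a set of Pareto solutions for two minimization objectives (makespan and total weighted tardiness), the short Python sketch below shows the underlying non-dominance test used to extract such a front. It is a generic illustration; the objective values in the example are invented and nothing here is taken from the HPABC implementation.

```python
# Minimal sketch, not from the paper: extracting the Pareto front for two
# minimization objectives (e.g. makespan and total weighted tardiness).
# The objective tuples used in the example are made up for illustration.

def dominates(a, b):
    """True if solution a dominates b (all objectives <= and at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

if __name__ == "__main__":
    # (makespan, total weighted tardiness) pairs
    pop = [(120, 35), (110, 50), (150, 20), (110, 45), (160, 60)]
    print(pareto_front(pop))  # -> [(120, 35), (150, 20), (110, 45)]
```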

  16. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

  17. Pithy Review on Routing Protocols in Wireless Sensor Networks and Least Routing Time Opportunistic Technique in WSN

    NASA Astrophysics Data System (ADS)

    Salman Arafath, Mohammed; Rahman Khan, Khaleel Ur; Sunitha, K. V. N.

    2018-01-01

    Most telecommunication standards development organizations are now focusing on device-to-device communication in order to provide proximity-based services and add-on services on top of the available cellular infrastructure, and opportunistic networks (Oppnets) and wireless sensor networks play a prominent role here. Routing in these networks is significant in fields such as traffic management and packet delivery, and it remains a prodigious research area with diverse unresolved issues. This paper first focuses on the importance and the concept of opportunistic routing, and then shifts to a prime aspect, the packet reception ratio, which is one of the key QoS-awareness parameters. The paper discusses two important routing functions in wireless sensor networks (WSN): route selection using the least routing time algorithm (LRTA) and data forwarding using a clustering technique. Finally, the simulation results reveal that LRTA performs relatively better than the existing system in terms of average packet reception ratio and connectivity.

  18. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm

    PubMed Central

    Lu, Guangquan; Xiong, Ying; Wang, Yunpeng

    2016-01-01

    Scheduling the recovery of an urban road network damaged by rainstorms, snow and other bad weather conditions, traffic incidents, and other daily events is essential. However, limited studies have been conducted to investigate this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Critical links are given priority in repair according to the basic concept of the greedy algorithm. In this study, the critical link of the current network is defined as the damaged link whose restoration yields the minimum ratio of the system-wide travel time of the resulting network to that of the worst (fully damaged) network. We re-evaluate the importance of the damaged links after each repair process is completed; that is, the critical link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can still quickly obtain an optimal schedule even if the scale of the road network is large, because the greedy algorithm reduces computational complexity. We prove in theory that the problem can be solved optimally using the greedy algorithm. The algorithm is also demonstrated on the Sioux Falls network. The problem discussed in this paper is highly significant in dealing with urban road network restoration. PMID:27768732
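    A compact Python sketch of the greedy idea described above follows: at each step the damaged link whose restoration most improves system-wide travel time is repaired, and the remaining links are re-evaluated against the updated network. The helper functions travel_time and restore are hypothetical placeholders for the paper's traffic assignment model.

```python
# Illustrative sketch only (assumptions: `travel_time(network)` evaluates
# system-wide travel time and `restore(network, link)` returns the network
# with that link repaired; neither comes from the paper).

def greedy_repair_schedule(network, damaged_links, travel_time, restore):
    """Repeatedly repair the link whose restoration most reduces travel time."""
    schedule = []
    remaining = list(damaged_links)
    while remaining:
        # Re-evaluate every remaining link against the *current* network state,
        # since link importance changes as repairs interact.
        best = min(remaining, key=lambda link: travel_time(restore(network, link)))
        network = restore(network, best)
        remaining.remove(best)
        schedule.append(best)
    return schedule
```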

  19. WavePacket: A Matlab package for numerical quantum dynamics.II: Open quantum systems, optimal control, and model reduction

    NASA Astrophysics Data System (ADS)

    Schmidt, Burkhard; Hartmann, Carsten

    2018-07-01

    WavePacket is an open-source program package for numerical simulations in quantum dynamics. It can solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which allows, e.g., simulating molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semi-classical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. Being highly versatile and offering visualization of quantum dynamics 'on the fly', WavePacket is well suited for teaching or research projects in atomic, molecular and optical physics as well as in physical or theoretical chemistry. Building on the previous Part I [Comp. Phys. Comm. 213, 223-234 (2017)], which dealt with closed quantum systems and discrete variable representations, the present Part II focuses on the dynamics of open quantum systems, with Lindblad operators modeling dissipation and dephasing. This part also describes the WavePacket function for optimal control of quantum dynamics, building on rapid monotonically convergent iteration methods. Furthermore, two different approaches to dimension reduction implemented in WavePacket are documented here. In the first, a balancing transformation based on the concepts of controllability and observability Gramians is used to identify states that are neither well controllable nor well observable; those states are either truncated or averaged out. In the other approach, the H2-error for a given reduced dimensionality is minimized by H2 optimal model reduction techniques, utilizing a bilinear iterative rational Krylov algorithm. The present work describes the MATLAB version of WavePacket 5.3.0, which is hosted and further developed at the Sourceforge platform, where extensive Wiki documentation as well as numerous worked-out demonstration examples with animated graphics can also be found.

  20. Mixed Criticality Scheduling for Industrial Wireless Sensor Networks

    PubMed Central

    Jin, Xi; Xia, Changqing; Xu, Huiting; Wang, Jintao; Zeng, Peng

    2016-01-01

    Wireless sensor networks (WSNs) have been widely used in industrial systems. Their real-time performance and reliability are fundamental to industrial production. Many works have studied these two aspects, but they focus only on single-criticality WSNs. Mixed-criticality requirements exist in many advanced applications in which different data flows have different levels of importance (or criticality). In this paper, we first propose a scheduling algorithm that guarantees the real-time performance and reliability requirements of data flows with different levels of criticality. The algorithm supports centralized optimization and adaptive adjustment, and it is able to improve both scheduling performance and flexibility. Then, we provide the schedulability test through rigorous theoretical analysis. We conduct extensive simulations, and the results demonstrate that the proposed scheduling algorithm and analysis significantly outperform existing ones. PMID:27589741

  1. A sustainable genetic algorithm for satellite resource allocation

    NASA Technical Reports Server (NTRS)

    Abbott, R. J.; Campbell, M. L.; Krenz, W. C.

    1995-01-01

    A hybrid genetic algorithm is used to schedule tasks for 8 satellites, each of which can be modelled as a robot whose task is to retrieve objects from a two-dimensional field. The objective is to find a schedule that maximizes the value of objects retrieved. Typical of the real-world tasks to which this corresponds is the scheduling of ground contacts for a communications satellite. An important feature of our application is that the amount of time available for running the scheduler is not necessarily known in advance. This requires that the scheduler produce reasonably good results after a short period but also continue to improve its results if allowed to run for a longer period. We satisfy this requirement by developing what we call a sustainable genetic algorithm.
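    A rough Python sketch of such a sustainable, anytime genetic-algorithm loop is shown below: it keeps a best-so-far schedule that can be returned whenever the (unknown in advance) time budget expires. The fitness, crossover and mutation hooks are placeholders, not the authors' operators.

```python
# Hedged sketch of an "anytime" genetic-algorithm loop in the spirit described
# above: it returns a usable schedule quickly and keeps improving while time
# remains. The fitness function, representation and operators are placeholders.

import random, time

def anytime_ga(init_population, fitness, crossover, mutate, time_budget_s):
    """Run a steady-state GA until the time budget expires; the best-so-far
    individual is always available, so stopping early still yields a schedule."""
    population = list(init_population)
    best = max(population, key=fitness)
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        p1, p2 = random.sample(population, 2)
        child = mutate(crossover(p1, p2))
        # Replace the worst individual if the child improves on it.
        worst_i = min(range(len(population)), key=lambda i: fitness(population[i]))
        if fitness(child) > fitness(population[worst_i]):
            population[worst_i] = child
        if fitness(child) > fitness(best):
            best = child
    return best
```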

  2. TreeMAC: Localized TDMA MAC protocol for real-time high-data-rate sensor networks

    USGS Publications Warehouse

    Song, W.-Z.; Huang, R.; Shirazi, B.; Husent, R.L.

    2009-01-01

    Earlier sensor network MAC protocols focus on energy conservation in low-duty-cycle applications, while some recent applications involve real-time high-data-rate signals. This motivates us to design an innovative localized TDMA MAC protocol to achieve high throughput and low congestion in data collection sensor networks, in addition to energy conservation. TreeMAC divides a time cycle into frames and each frame into slots. A parent determines its children's frame assignment based on their relative bandwidth demand, and each node calculates its own slot assignment based on its hop count to the sink. This innovative two-dimensional frame-slot assignment algorithm has the following nice theoretical properties. Firstly, for any node at any time slot, there is at most one active sender in its neighborhood (including itself). Secondly, packet scheduling with TreeMAC is bufferless, which therefore minimizes the probability of network congestion. Thirdly, the data throughput to the gateway is at least 1/3 of the optimum assuming reliable links. Our experiments on a 24-node testbed demonstrate that the TreeMAC protocol significantly improves network throughput and energy efficiency compared to TinyOS's default CSMA MAC protocol and a recent TDMA MAC protocol, Funneling-MAC [8]. © 2009 IEEE.
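    To illustrate the flavor of the hop-count-based slot assignment, a simplified Python sketch follows. It is not the exact TreeMAC rule; the frame parameters and the modulo-based slot formula are assumptions made for the example.

```python
# Simplified illustration (not the exact TreeMAC rule): each node picks a slot
# inside the frame assigned by its parent using its hop count to the sink,
# so nodes at adjacent depths transmit in different slots. The slot formula
# and parameters are assumptions for the sketch.

def slot_assignment(hop_count, frame_start, frame_len, slots_per_frame=3):
    """Map a node's hop count onto one slot inside its assigned frame."""
    offset = hop_count % slots_per_frame          # stagger transmissions by depth
    return frame_start + (offset * frame_len) // slots_per_frame

if __name__ == "__main__":
    # Two nodes sharing frame [0, 30) at depths 1 and 2 get disjoint slots.
    print(slot_assignment(1, frame_start=0, frame_len=30))   # 10
    print(slot_assignment(2, frame_start=0, frame_len=30))   # 20
```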

  3. Scheduling time-critical graphics on multiple processors

    NASA Technical Reports Server (NTRS)

    Meyer, Tom W.; Hughes, John F.

    1995-01-01

    This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.

  4. A high performance load balance strategy for real-time multicore systems.

    PubMed

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and task deadlines. Experimental results show that the proposed algorithm can reduce energy consumption by up to 54.2% and also reduce the number of missed deadlines, compared to the other scheduling algorithms outlined in this paper.

  5. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and task deadlines. Experimental results show that the proposed algorithm can reduce energy consumption by up to 54.2% and also reduce the number of missed deadlines, compared to the other scheduling algorithms outlined in this paper. PMID:24955382

  6. Research in network management techniques for tactical data communications networks

    NASA Astrophysics Data System (ADS)

    Boorstyn, R.; Kershenbaum, A.; Maglaris, B.; Sarachik, P.

    1982-09-01

    This is the final technical report for work performed on network management techniques for tactical data networks. It includes all technical papers that have been published during the control period. Research areas include Packet Network modelling, adaptive network routing, network design algorithms, network design techniques, and local area networks.

  7. Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization

    NASA Astrophysics Data System (ADS)

    Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.

    2018-06-01

    The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Considering the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks in an appropriate manner. Task execution affects the battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy is used to decide which tasks to trigger. The scheduling algorithm is tested in FloripaSat, which is a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite movement around the Earth. Tests have been conducted to show that the scheduling algorithm improves the CubeSat energy harvesting capability by 4.48% in a three-orbit experiment and by up to 8.46% in a single orbit cycle in comparison with the CubeSat operating without the scheduling algorithm.
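    A rough Python sketch of a software perturb-and-observe loop of the kind described above is given below. The hooks measure_panel_power, trigger_task and shed_task are hypothetical stand-ins for the FloripaSat firmware interfaces, and the loop is only meant to show how task triggering can track the maximum power point.

```python
# Rough sketch of a software perturb-and-observe loop that decides whether to
# trigger an extra task so the solar panels stay near their maximum power
# point. Function names (`measure_panel_power`, `trigger_task`, `shed_task`)
# are hypothetical hooks, not the FloripaSat firmware API.

def perturb_and_observe(measure_panel_power, trigger_task, shed_task, steps):
    last_power = measure_panel_power()
    increased_load = True                      # direction of the last perturbation
    for _ in range(steps):
        # Perturb: change the load by starting or shedding one schedulable task.
        (trigger_task if increased_load else shed_task)()
        power = measure_panel_power()
        # Observe: keep perturbing in the same direction only if power improved.
        if power < last_power:
            increased_load = not increased_load
        last_power = power
```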

  8. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). The key idea is to shuffle the IP core assignment of each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time. The core assignment is then accepted or rejected according to the simulated annealing acceptance rule, and the optimum solution is finally attained. We run the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC’02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is shorter than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the state of the art at present.
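    The following Python sketch shows a generic simulated-annealing skeleton in the spirit of the method above: a shuffle move perturbs the core-to-TAM assignment, the testing time acts as the cost, and worse solutions are accepted with a temperature-dependent probability. The move, cost function and cooling parameters are placeholders, not the authors' settings.

```python
# Generic simulated-annealing skeleton in the spirit of the method above.
# `initial_assignment`, `testing_time` (the cost) and `shuffle` (the move that
# reassigns cores among TAMs) are placeholders the reader must supply.

import math, random

def simulated_annealing(initial_assignment, testing_time, shuffle,
                        t_start=1000.0, t_end=1.0, alpha=0.95, moves_per_temp=50):
    current = initial_assignment
    best = current
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            candidate = shuffle(current)
            delta = testing_time(candidate) - testing_time(current)
            # Accept improvements always, worse solutions with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if testing_time(current) < testing_time(best):
                    best = current
        t *= alpha                              # geometric cooling schedule
    return best
```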

  9. Application guide for universal source encoding for space

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.

    1993-01-01

    Lossless data compression was studied for many NASA missions. The Rice algorithm was demonstrated to provide better performance than other available techniques on most scientific data. A top-level description of the Rice algorithm is first given, along with some new capabilities implemented in both software and hardware forms. Systems issues important for onboard implementation, including sensor calibration, error propagation, and data packetization, are addressed. The latter part of the guide provides twelve case study examples drawn from a broad spectrum of science instruments.
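    As a purely educational aid, the Python sketch below shows Rice coding of non-negative integers with parameter k (unary quotient followed by a k-bit remainder). It is not the flight or CCSDS implementation discussed in the guide, and the unary convention chosen here is only one common variant.

```python
# Educational sketch of Rice coding for non-negative integers (not the exact
# flight implementation referenced above). The unary convention used here
# (q ones followed by a terminating zero) is one common choice.

def rice_encode(n: int, k: int) -> str:
    """Encode n >= 0 with Rice parameter k; returns a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                       # count of leading ones
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

if __name__ == "__main__":
    for value in (0, 5, 23):
        code = rice_encode(value, k=3)
        assert rice_decode(code, 3) == value
        print(value, "->", code)              # e.g. 23 -> "110111"
```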

  10. A note on resource allocation scheduling with group technology and learning effects on a single machine

    NASA Astrophysics Data System (ADS)

    Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu

    2017-09-01

    In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For the general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed, respectively. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.

  11. Improved NSGA model for multi objective operation scheduling and its evaluation

    NASA Astrophysics Data System (ADS)

    Li, Weining; Wang, Fuyu

    2017-09-01

    Reasonable operation scheduling can increase the income of a hospital and improve patient satisfaction. In this paper, a multi-objective operation scheduling method with an improved NSGA algorithm is used to shorten the operation time, reduce the operation cost and lower the operation risk. A multi-objective optimization model is established for flexible operation scheduling, the Pareto solutions are obtained through MATLAB simulation, and the resulting data are standardized. The optimal scheduling scheme is then selected using the combined entropy weight-TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.

  12. Capability of the Maximax&Maximin selection operator in the evolutionary algorithm for a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Ramli, Razamin; Tein, Lim Huai

    2016-08-01

    A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels in managing nurse personnel. Hence, constructing the best nurse work schedule is a worthwhile effort. To this end, an improved selection operator in an Evolutionary Algorithm (EA) strategy for a nurse scheduling problem (NSP) is proposed, with smart and efficient scheduling procedures taken into consideration. The performance of each potential solution or schedule is computed through fitness evaluation. The best-so-far solution, which fulfils all constraints considered in the NSP, is obtained via a special Maximax&Maximin (MM) parent selection operator embedded in the EA.

  13. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
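    For readers unfamiliar with the problem shape, the Python sketch below shows a simple greedy heuristic for set covering. It is emphatically not the Lemke, Salkin and Spielberg algorithm used in SAMPLE (an exact method); the sets and costs in the example are invented.

```python
# For illustration only: a simple greedy heuristic for the set covering
# problem. This is NOT the Lemke-Salkin-Spielberg algorithm used in SAMPLE;
# it just shows the problem shape.

def greedy_set_cover(universe, subsets, costs):
    """Pick subsets until `universe` is covered, greedily by cost per new element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Best ratio of cost to newly covered elements.
        best = min((name for name in subsets if subsets[name] & uncovered),
                   key=lambda name: costs[name] / len(subsets[name] & uncovered))
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

if __name__ == "__main__":
    subsets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {5}}
    costs = {"A": 3.0, "B": 1.0, "C": 2.5, "D": 1.0}
    print(greedy_set_cover({1, 2, 3, 4, 5}, subsets, costs))  # -> ['B', 'D', 'A']
```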

  14. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  15. Electronically nonadiabatic wave packet propagation using frozen Gaussian scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kondorskiy, Alexey D., E-mail: kondor@sci.lebedev.ru; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp

    2015-09-21

    We present an approach which allows one to employ the adiabatic wave packet propagation technique and semiclassical theory to treat nonadiabatic processes by using trajectory hopping. The approach developed generates a bunch of hopping trajectories and gives all additional information needed to incorporate the effect of nonadiabatic coupling into the wave packet dynamics. This provides an interface between a general adiabatic frozen Gaussian wave packet propagation method and the trajectory surface hopping technique. The basic idea suggested in [A. D. Kondorskiy and H. Nakamura, J. Chem. Phys. 120, 8937 (2004)] is revisited and complemented in the present work by the elaboration of efficient numerical algorithms. We combine our approach with the adiabatic Herman-Kluk frozen Gaussian approximation. The efficiency and accuracy of the resulting method are demonstrated by applying it to popular benchmark model systems including three Tully's models and a 24D model of pyrazine. It is shown that the photoabsorption spectrum is successfully reproduced by using a few hundred trajectories. We employ the compact finite difference Hessian update scheme to consider the feasibility of ab initio "on-the-fly" simulations. It is found that this technique allows us to obtain reliable final results using several Hessian matrix calculations per trajectory.

  16. Design and implementation of a random neural network routing engine.

    PubMed

    Kocak, T; Seeber, J; Terzioglu, H

    2003-01-01

    Random neural network (RNN) is an analytically tractable spiked neural network model that has been implemented in software for a wide range of applications for over a decade. This paper presents the hardware implementation of the RNN model. Recently, cognitive packet networks (CPN) have been proposed as an alternative packet network architecture in which there is no routing table; instead, RNN-based reinforcement learning is used to route packets. In particular, we describe implementation details for the RNN-based routing engine of a CPN network processor chip: the smart packet processor (SPP). The SPP is a dual-port device that stores, modifies, and interprets the defining characteristics of multiple RNN models. In addition to hardware design improvements over the software implementation, such as the dual-access memory, output calculation step, and reduced output calculation module, this paper introduces a major modification to the reinforcement learning algorithm used in the original CPN specification such that the number of weight terms is reduced from 2n² to 2n. This not only yields significant memory savings, but also simplifies the calculations for the steady-state probabilities (neuron outputs in RNN). Simulations have been conducted to confirm the proper functionality of the isolated SPP design as well as of multiple SPPs in a networked environment.

  17. IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications

    NASA Technical Reports Server (NTRS)

    Plaunt, Christian; Rajan, Kanna

    2004-01-01

    The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes, including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint-posting planner/scheduler and a smart executive which can cope with a dynamic environment to make better use of bandwidth than the current circuit-switched approach.

  18. Models for IP/MPLS routing performance: convergence, fast reroute, and QoS impact

    NASA Astrophysics Data System (ADS)

    Choudhury, Gagan L.

    2004-09-01

    We show how to model the black-holing and looping of traffic during an Interior Gateway Protocol (IGP) convergence event in an IP network and how to significantly improve both the convergence time and packet loss duration through IGP parameter tuning and algorithmic improvement. We also explore some congestion avoidance and congestion control algorithms that can significantly improve the stability of networks in the face of occasional massive control message storms. Specifically, we show the positive impact of prioritizing Hello and Acknowledgement packets and of slowing down LSA generation and retransmission when congestion is detected in the network. For some types of video, voice signaling and circuit emulation applications it is necessary to reduce traffic loss durations following a convergence event to below 100 ms, and we explore that using Fast Reroute algorithms based on Multiprotocol Label Switching Traffic Engineering (MPLS-TE) that effectively bypass IGP convergence. We explore the scalability of primary and backup MPLS-TE tunnels where the MPLS-TE domain is backbone-only or edge-to-edge. We also show how much extra backbone resource is needed to support Fast Reroute and how that can be reduced by taking advantage of Constrained Shortest Path First (CSPF) routing of MPLS-TE and by reserving less than 100% of the primary tunnel bandwidth during Fast Reroute.

  19. Time-critical multirate scheduling using contemporary real-time operating system services

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1983-01-01

    Although real-time operating systems provide many of the task control services necessary to process time-critical applications (i.e., applications with fixed, invariant deadlines), it may still be necessary to provide a scheduling algorithm at a level above the operating system in order to coordinate a set of synchronized, time-critical tasks executing at different cyclic rates. This paper examines the scheduling requirements for such applications and develops scheduling algorithms using services provided by contemporary real-time operating systems.
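    One common way to coordinate synchronized tasks running at different cyclic rates on top of an RTOS is rate-monotonic priority assignment; the hedged Python sketch below shows that assignment together with the classic Liu-Layland utilization test. The task set in the example is invented, and the report itself may use a different scheme.

```python
# Hedged illustration (not from the report): coordinating periodic tasks that
# run at different cyclic rates with rate-monotonic priorities, plus the
# classic Liu-Layland sufficient schedulability test.

def rate_monotonic_priorities(tasks):
    """tasks: dict name -> (period, worst_case_exec_time).
    Shorter period => higher priority (returned first)."""
    return sorted(tasks, key=lambda name: tasks[name][0])

def liu_layland_schedulable(tasks):
    n = len(tasks)
    utilization = sum(c / t for t, c in tasks.values())
    return utilization <= n * (2 ** (1 / n) - 1)

if __name__ == "__main__":
    tasks = {"nav_update": (20, 4), "display": (40, 8), "logging": (80, 10)}
    print(rate_monotonic_priorities(tasks))     # ['nav_update', 'display', 'logging']
    print(liu_layland_schedulable(tasks))       # utilization 0.525 <= 0.7798 -> True
```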

  20. The Autism Diagnostic Observation Schedule, Module 4: Revised Algorithm and Standardized Severity Scores

    ERIC Educational Resources Information Center

    Hus, Vanessa; Lord, Catherine

    2014-01-01

    The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4…

  1. Routing and scheduling of hazardous materials shipments: algorithmic approaches to managing spent nuclear fuel transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, R.G.

    Much controversy surrounds government regulation of the routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against expected benefits from local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide a context for the analysis. Next, a multiobjective shortest path algorithm to find the set of efficient routes under conflicting objectives is presented. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Also, scheduling algorithms are presented to estimate the travel time delay due to HMT curfews along a route. Algorithms are presented assuming either deterministic or stochastic travel times between curfew cities and also possible rerouting to avoid such cities. These algorithms are applied to the case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used. One data set included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs). The other data set contained estimates of the population residing within 0.5 miles of the IHS in the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay.

  2. Network congestion control algorithm based on Actor-Critic reinforcement learning model

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen

    2018-04-01

    Aiming at the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion problems can be better detected and prevented. A simulation experiment of the network congestion control algorithm is designed according to Actor-Critic reinforcement learning. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the packet dropping probability is adaptively adjusted so as to improve network performance and avoid congestion. Based on the above findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.

  3. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System.

    PubMed

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-11-16

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation, after which an artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimizes these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range.

  4. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    PubMed Central

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-01-01

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation, after which an artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimizes these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range. PMID:27854342

  5. Dynamic Hop Service Differentiation Model for End-to-End QoS Provisioning in Multi-Hop Wireless Networks

    NASA Astrophysics Data System (ADS)

    Youn, Joo-Sang; Seok, Seung-Joon; Kang, Chul-Hee

    This paper presents a new QoS model for end-to-end service provisioning in multi-hop wireless networks. In legacy IEEE 802.11e based multi-hop wireless networks, the fixed assignment of service classes according to a flow's priority at every node causes a priority inversion problem when performing end-to-end service differentiation. Thus, this paper proposes a new QoS provisioning model called Dynamic Hop Service Differentiation (DHSD) to alleviate the problem and support effective service differentiation between end-to-end nodes. Many previous works on QoS models based on 802.11e service differentiation focus on packet scheduling over several service queues with different service rates and priorities. Our model, however, concentrates on a dynamic class selection scheme, called Per Hop Class Assignment (PHCA), in the node's MAC layer, which selects a proper service class for each packet, in accordance with queue states and service requirements, at every node along the end-to-end route of the packet. The proposed QoS solution is evaluated using the OPNET simulator. The simulation results show that the proposed model outperforms both best-effort and 802.11e based strict priority service models in mobile ad hoc environments.

  6. An implementation of the SNR high speed network communication protocol (Receiver part)

    NASA Astrophysics Data System (ADS)

    Wan, Wen-Jyh

    1995-03-01

    The goal of this thesis work is to implement the receiver part of the SNR high-speed network transport protocol. The approach was to use the Systems of Communicating Machines (SCM) as the formal definition of the protocol. Programs were developed on top of the Unix system using the C programming language. The Unix system features adopted for this implementation were multitasking, signals, shared memory, semaphores, sockets, timers and process control. The problems encountered, and solved, were signal loss, shared memory conflicts, process synchronization, scheduling, data alignment and errors in the SCM specification itself. The result was a correctly functioning program which implements the SNR protocol. The system was tested using different connection modes, lost packets, duplicate packets and large data transfers. The contributions of this thesis are: (1) implementation of the receiver part of the SNR high-speed transport protocol; (2) testing and integration with the transmitter part of the SNR transport protocol on an FDDI data link layer network; (3) demonstration of the functions of the SNR transport protocol such as connection management, sequenced delivery, flow control and error recovery using selective repeat methods of retransmission; and (4) modifications to the SNR transport protocol specification, such as corrections for incorrect predicate conditions, definitions of additional packet type formats, and solutions for signal loss and process contention problems.

  7. Asymptotically reliable transport of multimedia/graphics over wireless channels

    NASA Astrophysics Data System (ADS)

    Han, Richard Y.; Messerschmitt, David G.

    1996-03-01

    We propose a multiple-delivery transport service tailored for graphics and video transported over connections with wireless access. This service operates at the interface between the transport and application layers, balancing the subjective delay and image quality objectives of the application with the low reliability and limited bandwidth of the wireless link. While techniques like forward error correction, interleaving and retransmission improve reliability over wireless links, they also increase latency substantially when bandwidth is limited. Certain forms of interactive multimedia datatypes can benefit from an initial delivery of a corrupt packet to lower the perceptual latency, as long as reliable delivery occurs eventually. Multiple delivery of successively refined versions of the received packet, terminating when a sufficiently reliable version arrives, exploits the redundancy inherently required to improve reliability without a traffic penalty. Modifications to automatic repeat request (ARQ) methods to implement this transport service are proposed, which we term 'leaky ARQ'. For the specific case of pixel-coded window-based text/graphics, we describe additional functions needed to more effectively support urgent delivery and asymptotic reliability. X server emulation suggests that users will accept a multi-second delay between a (possibly corrupt) packet and the ultimate reliably-delivered version. The relaxed delay for reliable delivery can be exploited to improve traffic capacity by scheduling retransmissions.

  8. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions under which evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm, stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically sized EOS scheduling problems. Schedules are represented by a permutation (a non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
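    The permutation representation and greedy decoder described above can be sketched in a few lines of Python, as below. The fits and commit hooks are hypothetical placeholders for the real observation-resource model; only the decoding loop itself reflects the description in the abstract.

```python
# Sketch of the permutation-decoding idea described above: a fixed greedy
# decoder walks the requests in permutation order and keeps only those that
# fit. `fits` and `commit` are placeholder hooks standing in for the real
# resource/constraint model.

def decode_schedule(permutation, requests, fits, commit):
    """Build a schedule by trying requests in the order given by `permutation`.

    permutation : list of indices into `requests`
    fits(schedule, request) -> bool : does the request violate existing commitments?
    commit(schedule, request) -> new schedule with the request added
    """
    schedule = []
    for idx in permutation:
        request = requests[idx]
        if fits(schedule, request):
            schedule = commit(schedule, request)
        # otherwise the request is simply discarded, as in the decoder above
    return schedule
```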

  9. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance the load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared with an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned approach in terms of communication cost.

  10. Hybrid glowworm swarm optimization for task scheduling in the cloud environment

    NASA Astrophysics Data System (ADS)

    Zhou, Jing; Dong, Shoubin

    2018-06-01

    In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) based on glowworm swarm optimization (GSO), which uses a technique of evolutionary computation, a strategy of quantum behaviour based on the principle of neighbourhood, offspring production and random walk, to achieve more efficient scheduling with reasonable scheduling costs. The proposed HGSO reduces the redundant computation and the dependence on the initialization of GSO, accelerates the convergence and more easily escapes from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms to deal with independent tasks.

  11. Electricity Usage Scheduling in Smart Building Environments Using Smart Devices

    PubMed Central

    Lee, Eunji; Bahn, Hyokyung

    2013-01-01

    With the recent advances in smart grid technologies as well as the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company adopts a different electricity price at each time slot, taking the peak time into account. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects changes in electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem with exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge search space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%. PMID:24453860
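    To make the cost model concrete, the Python sketch below places shiftable appliances into the cheapest time slots under a per-slot power cap. This greedy toy is only an illustration of the search space; the paper's actual method is the genetic-algorithm heuristic described above, and the prices, appliances and cap are invented.

```python
# Toy illustration of the underlying cost model (the paper itself uses a
# genetic-algorithm heuristic): shiftable appliances, each assumed to run for
# one slot, are placed in the cheapest time slots that still respect a
# per-slot power cap. Prices, appliances and the cap below are example data.

def schedule_appliances(prices, appliances, power_cap):
    """prices: cost per kWh for each slot; appliances: list of (name, kW).
    Returns {name: slot} chosen greedily under the per-slot cap."""
    load = [0.0] * len(prices)
    assignment = {}
    # Place the largest loads first so they get the pick of cheap slots.
    for name, kw in sorted(appliances, key=lambda a: -a[1]):
        feasible = [s for s in range(len(prices)) if load[s] + kw <= power_cap]
        slot = min(feasible, key=lambda s: prices[s])
        assignment[name] = slot
        load[slot] += kw
    return assignment

if __name__ == "__main__":
    prices = [0.30, 0.12, 0.12, 0.25]           # $/kWh for four slots
    appliances = [("washer", 2.0), ("dishwasher", 1.5), ("dryer", 3.0)]
    print(schedule_appliances(prices, appliances, power_cap=4.0))
```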

  12. Electricity usage scheduling in smart building environments using smart devices.

    PubMed

    Lee, Eunji; Bahn, Hyokyung

    2013-01-01

    With the recent advances in smart grid technologies as well as the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company adopts a different electricity price at each time slot, taking the peak time into account. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects changes in electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem with exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge search space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%.

  13. Solving multi-objective job shop scheduling problems using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Finding optimal schedules for job shop scheduling problems is highly important for many real-world industrial applications. In this paper, a multi-objective job shop scheduling problem that simultaneously minimizes makespan and tardiness is taken into account. The problem is considered to be more complex due to the multiple business criteria that must be satisfied. To solve the problem more efficiently and to obtain a set of non-dominated solutions, a metaheuristic-based non-dominated sorting genetic algorithm is presented. In addition, a task-based representation is used for solution encoding, and tournament selection based on rank and crowding distance is applied for offspring selection. Swapping and insertion mutations are employed to increase the diversity of the population and to perform an intensive search. To evaluate the modified non-dominated sorting genetic algorithm, a set of modified benchmark job shop problems obtained from the OR-Library is used, and the results are assessed based on the number of non-dominated solutions and the quality of the schedules obtained by the algorithm.

  14. Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Pan, Feng

    2007-09-01

    In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and moving pattern of the nodes, as well as the link conditions between neighboring nodes. Then, for a multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the best route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing for each individual packet. Numerical results demonstrate that, using the route discovered by our distributed algorithm in a changing network environment, multimedia applications can statistically achieve better QoS.

  15. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, given the incomplete description provided by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals by considering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548

  16. Towards a hybrid energy efficient multi-tree-based optimized routing protocol for wireless networks.

    PubMed

    Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan

    2012-12-13

    This paper considers the problem of designing power-efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy-efficient tree-based optimized routing protocol based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to the neighbor that optimizes the ratio of power cost over distance progress in landmark coordinates, among the neighbors that reduce the landmark coordinates and do not increase the distance in tree coordinates. If such a node does not exist, then forwarding is made to the neighbor that reduces the tree-based distance only and optimizes the ratio of power cost over tree distance progress. We theoretically prove guaranteed packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, with only up to 30% additional power compared to the centralized shortest weighted path algorithm.

  17. Towards a Hybrid Energy Efficient Multi-Tree-Based Optimized Routing Protocol for Wireless Networks

    PubMed Central

    Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan

    2012-01-01

    This paper considers the problem of designing power-efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy-efficient tree-based optimized routing protocol based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to the neighbor that optimizes the ratio of power cost over distance progress in landmark coordinates, among the neighbors that reduce the landmark coordinates and do not increase the distance in tree coordinates. If such a node does not exist, then forwarding is made to the neighbor that reduces the tree-based distance only and optimizes the ratio of power cost over tree distance progress. We theoretically prove guaranteed packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, with only up to 30% additional power compared to the centralized shortest weighted path algorithm. PMID:23443398

  18. Distributed synchronization control of complex networks with communication constraints.

    PubMed

    Xu, Zhenhua; Zhang, Dan; Song, Hongbo

    2016-11-01

    This paper is concerned with the distributed synchronization control of complex networks with communication constraints. In this work, the controllers communicate with each other through a wireless network, acting as a controller network. Due to the constrained transmission power, techniques such as packet size reduction and transmission rate reduction schemes are proposed, which help reduce the communication load of the controller network. The packet dropout problem is also considered in the controller design since it is often encountered in networked control systems. We show that the closed-loop system can be modeled as a switched system with uncertainties and random variables. By resorting to the switched system approach and stochastic system analysis methods, a new sufficient condition is first proposed such that exponential synchronization is guaranteed in the mean-square sense. The controller gains are determined using the well-known cone complementarity linearization (CCL) algorithm. Finally, a simulation study is performed, which demonstrates the effectiveness of the proposed design algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster

    PubMed Central

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-01

    Decentralized clustering in modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which can prevent the entire system from breaking down due to a single point of failure. Recently, toolkits such as Akka have commonly been used to easily build this kind of cluster. However, clusters of this kind, which use Gossip as their membership management protocol and use a link failure detection mechanism to detect link failures, cannot deal with the scenario in which a node stochastically drops packets and corrupts the member status of the cluster. In this paper, we formulate the problem as evaluating the link quality and finding a maximum clique (NP-complete) in the connectivity graph. We then propose an algorithm that consists of two models driven by data from the application layer to solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm has good performance. PMID:29360792

  20. Efficient genetic algorithms using discretization scheduling.

    PubMed

    McLay, Laura A; Goldberg, David E

    2005-01-01

    In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these applications, the cost and accuracy vary with the discretization error introduced when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to that of a GA using a constant discretization. There are three ingredients for discretization scheduling: population sizing, the estimated time for each function evaluation, and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings achieved by using discretization scheduling.
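
    The scheduling idea lends itself to a small sketch. The following toy example (an illustration under stated assumptions, not the authors' implementation) evolves a one-dimensional solution whose fitness is an integral estimated by the trapezoid rule; it uses coarse quadrature in early generations and refines it later, spending accuracy only when the population is already close to convergence.

    ```python
    import random

    def fitness(x, n_points):
        """Estimate integral_0^1 (t - x)^2 dt with the composite trapezoid rule;
        fewer points -> cheaper but less accurate evaluation."""
        h = 1.0 / (n_points - 1)
        total = 0.0
        for i in range(n_points):
            t = i * h
            w = 0.5 if i in (0, n_points - 1) else 1.0
            total += w * (t - x) ** 2
        return total * h          # the true integral is minimised at x = 0.5

    def ga_with_discretization_schedule(generations=30, pop_size=20):
        pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
        for g in range(generations):
            # Discretization schedule: start with 8 quadrature points, refine later.
            n_points = 8 * 2 ** min(4, g // 8)
            pop.sort(key=lambda x: fitness(x, n_points))
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = 0.5 * (a + b) + random.gauss(0.0, 0.05)  # blend crossover + mutation
                children.append(min(1.0, max(0.0, child)))
            pop = parents + children
        return min(pop, key=lambda x: fitness(x, 128))

    random.seed(1)
    print(round(ga_with_discretization_schedule(), 3))   # should be close to 0.5
    ```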

  1. Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload-reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.

  2. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload-reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.

  3. A Novel Algorithm Combining Finite State Method and Genetic Algorithm for Solving Crude Oil Scheduling Problem

    PubMed Central

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than either the GA or the FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for the simulation study. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  4. SARDA Surface Schedulers

    NASA Technical Reports Server (NTRS)

    Malik, Waqar

    2016-01-01

    Provide an overview of the algorithms used in the SARDA (Spot and Runway Departure Advisor) HITL (Human-in-the-Loop) simulations for Dallas Fort Worth International Airport and Charlotte Douglas International Airport. Outline a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the single runway scheduling (SRS) problem, and discuss heuristics that restrict the search space of the DP-based algorithm and provide improvements.

  5. Multiobjective optimisation design for enterprise system operation in the case of scheduling problem with deteriorating jobs

    NASA Astrophysics Data System (ADS)

    Wang, Hongfeng; Fu, Yaping; Huang, Min; Wang, Junwei

    2016-03-01

    The design of operation processes is one of the key issues in the manufacturing and service sectors. As a typical operation process, scheduling with consideration of deteriorating effects has been widely studied; however, the current literature has studied only a single functional requirement and rarely considers the multiple functional requirements that are critical for a real-world scheduling process. In this article, two functional requirements are involved in the design of a scheduling process with deteriorating effects and are formulated as the two objectives of a mathematical programming model. A novel multiobjective evolutionary algorithm is proposed to solve this model, combining three strategies: a multiple-population scheme, a rule-based local search method and an elitist preserve strategy. To validate the proposed model and algorithm, a series of randomly generated instances is tested, and the experimental results indicate that the model is effective and that the proposed algorithm achieves satisfactory performance, outperforming other state-of-the-art multiobjective evolutionary algorithms, such as the nondominated sorting genetic algorithm II and the multiobjective evolutionary algorithm based on decomposition, on all test instances.

  6. Congestion control and routing over satellite networks

    NASA Astrophysics Data System (ADS)

    Cao, Jinhua

    Satellite networks and transmissions find their application in the fields of computer communications, telephone communications, television broadcasting, transportation, space situational awareness systems and so on. This thesis mainly focuses on two networking issues affecting satellite networking: network congestion control and network routing optimization. Congestion, which leads to long queueing delays, packet losses or both, is a networking problem that has drawn the attention of many researchers. The goal of congestion control mechanisms is to ensure high bandwidth utilization while avoiding network congestion by regulating the rate at which traffic sources inject packets into a network. In this thesis, we propose a stable congestion controller using data-driven, safe switching control theory to improve the dynamic performance of satellite Transmission Control Protocol/Active Queue Management (TCP/AQM) networks. First, the stable region of the Proportional-Integral (PI) parameters for a nominal model is explored. Then, a PI controller, whose parameters are adaptively tuned by switching among members of a given candidate set using observed plant data, is presented and compared with some classical AQM policy examples, such as Random Early Detection (RED) and fixed PI control. A new cost-detectable switching law with an interval cost function switching algorithm, which improves the performance and also saves computational cost, is developed and compared with a law commonly used in the switching control literature. Finite-gain stability of the system is proved. A fuzzy logic PI controller is incorporated as a special candidate to achieve good performance at all nominal points with the available set of candidate controllers. Simulations are presented to validate the theory. An efficient routing algorithm plays a key role in optimizing network resources. In this thesis, we briefly analyze Low Earth Orbit (LEO) satellite networks, review the Cross Entropy (CE) method and then develop a novel on-demand routing system named Cross Entropy Accelerated Ant Routing System (CEAARS) for regular-constellation LEO satellite networks. By implementing simulations on an Iridium-like satellite network, we compare the proposed CEAARS algorithm with the two classical approaches to adaptive routing on the Internet, distance-vector (DV) and link-state (LS), as well as with the original Cross Entropy Ant Routing System (CEARS). DV algorithms are based on the distributed Bellman-Ford algorithm, and LS algorithms are implementations of Dijkstra's single-source shortest-path algorithm. The results show that CEAARS not only remarkably improves the convergence speed towards optimal or suboptimal paths, but also reduces the number of overhead ants (management packets).
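
    For readers unfamiliar with PI-based AQM, the sketch below shows the discrete-time PI update that maps the deviation of the queue length from a reference into a packet-drop probability. The gains and reference queue length are illustrative assumptions; the thesis' contribution is switching adaptively among a candidate set of such controllers rather than this single fixed PI law.

    ```python
    def make_pi_aqm(kp=1.8e-4, ki=2.6e-5, q_ref=200):
        """Return a per-sample PI update: drop probability from queue error.
        Gains and reference queue length are illustrative assumptions."""
        state = {"p": 0.0, "prev_err": 0.0}

        def update(q_len):
            err = q_len - q_ref
            # Incremental (velocity) form of the PI law.
            state["p"] += kp * (err - state["prev_err"]) + ki * err
            state["p"] = min(1.0, max(0.0, state["p"]))
            state["prev_err"] = err
            return state["p"]

        return update

    pi = make_pi_aqm()
    for q in (150, 250, 400, 350, 220):      # sampled queue lengths (packets)
        print(round(pi(q), 4))
    ```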

  7. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/Terminal Radar Approach Control (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
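
    A minimal sketch of the FCFS core described above, under simplifying assumptions (a single runway, one uniform separation time, and invented field names): aircraft are ordered by estimated time of arrival, and each scheduled time of arrival is the later of the aircraft's own ETA and the preceding aircraft's scheduled time plus the required separation.

    ```python
    def fcfs_schedule(arrivals, separation=90.0):
        """arrivals: list of (flight_id, eta_seconds). Returns (flight_id, sta).
        A single runway and a uniform separation are simplifying assumptions."""
        schedule = []
        last_sta = None
        for flight, eta in sorted(arrivals, key=lambda a: a[1]):
            sta = eta if last_sta is None else max(eta, last_sta + separation)
            schedule.append((flight, sta))
            last_sta = sta
        return schedule

    demo = [("AAL12", 100.0), ("UAL7", 130.0), ("DAL3", 400.0)]
    for flight, sta in fcfs_schedule(demo):
        print(flight, sta)        # AAL12 100.0, UAL7 190.0, DAL3 400.0
    ```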

  8. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in a significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and, on regular grids, shown to perform best in terms of the product metric of speed and efficiency.

  9. Dealing with Liars: Misbehavior Identification via Rényi-Ulam Games

    NASA Astrophysics Data System (ADS)

    Kozma, William; Lazos, Loukas

    We address the problem of identifying misbehaving nodes that refuse to forward packets in wireless multi-hop networks. We map the process of locating the misbehaving nodes to the classic Rényi-Ulam game of 20 questions. Compared to previous methods, our mapping allows the evaluation of node behavior on a per-packet basis, without the need for energy-expensive overhearing techniques or intensive acknowledgment schemes. Furthermore, it copes with colluding adversaries that coordinate their behavioral patterns to avoid identification and frame honest nodes. We show via simulations that our algorithms reduce the communication overhead for identifying misbehaving nodes by at least one order of magnitude compared to other methods, while increasing the identification delay logarithmically with the path size.

  10. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    PubMed Central

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  11. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    PubMed

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  12. QoS support over ultrafast TDM optical networks

    NASA Astrophysics Data System (ADS)

    Narvaez, Paolo; Siu, Kai-Yeung; Finn, Steven G.

    1999-08-01

    HLAN is a promising architecture for realizing Tb/s access networks based on ultra-fast optical TDM technologies. This paper presents new research results on efficient algorithms for supporting quality of service over the HLAN network architecture. In particular, we propose a new scheduling algorithm that emulates fair queuing in a distributed manner for bandwidth allocation purposes. The proposed scheduler collects information on the queue of each host on the network and then instructs each host how much data to send. Our new scheduling algorithm ensures full bandwidth utilization while guaranteeing fairness among all hosts.
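
    As a hedged sketch of the fairness idea behind such a scheduler (not the paper's algorithm), the function below computes a max-min fair share of the link capacity from each host's requested rate, which is one common way to emulate fair queuing centrally from collected per-host queue information.

    ```python
    def max_min_fair(demands, capacity):
        """demands: {host: requested_rate}. Returns {host: allocated_rate}
        under max-min fairness; a simplified stand-in for the paper's scheduler."""
        alloc = {}
        remaining = dict(demands)
        cap = float(capacity)
        while remaining:
            share = cap / len(remaining)
            # Hosts asking for less than the equal share are fully satisfied.
            satisfied = {h: d for h, d in remaining.items() if d <= share}
            if not satisfied:
                alloc.update({h: share for h in remaining})
                return alloc
            for h, d in satisfied.items():
                alloc[h] = d
                cap -= d
                del remaining[h]
        return alloc

    print(max_min_fair({"h1": 1.0, "h2": 4.0, "h3": 10.0}, capacity=9.0))
    # {'h1': 1.0, 'h2': 4.0, 'h3': 4.0}
    ```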

  13. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah

    PubMed Central

    Tian, Le; Latré, Steven

    2017-01-01

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, a major step towards applying the RAW mechanism in real-life IoT networks. PMID:28677617
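
    A hedged sketch of the grouping step described above (the slot count, names, and interval estimator are assumptions, not the TAROA specification): each station's transmission interval is estimated from the packets the AP observed in the last beacon interval, and the most frequent transmitters are spread across the RAW slots to balance contention.

    ```python
    def estimate_intervals(packet_times, beacon_interval):
        """packet_times: {station: [arrival times within the last beacon interval]}.
        Returns an estimated inter-packet interval per station (assumption: the
        mean gap, or the whole beacon interval if only one packet was seen)."""
        est = {}
        for sta, times in packet_times.items():
            times = sorted(times)
            gaps = [b - a for a, b in zip(times, times[1:])]
            est[sta] = sum(gaps) / len(gaps) if gaps else beacon_interval
        return est

    def assign_raw_slots(intervals, n_slots):
        """Round-robin stations over RAW slots, most frequent transmitters first,
        so heavy stations do not share a slot more than necessary."""
        slots = {s: [] for s in range(n_slots)}
        ordered = sorted(intervals, key=lambda sta: intervals[sta])
        for i, sta in enumerate(ordered):
            slots[i % n_slots].append(sta)
        return slots

    seen = {"s1": [0.1, 0.6, 1.1], "s2": [0.4], "s3": [0.2, 0.7]}
    iv = estimate_intervals(seen, beacon_interval=1.5)
    print(assign_raw_slots(iv, n_slots=2))
    ```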

  14. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.

    PubMed

    Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen

    2017-07-04

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, a major step towards applying the RAW mechanism in real-life IoT networks.

  15. The Traffic Management Advisor

    NASA Technical Reports Server (NTRS)

    Nedell, William; Erzberger, Heinz; Neuman, Frank

    1990-01-01

    The traffic management advisor (TMA) comprises algorithms, a graphical interface, and interactive tools for controlling the flow of air traffic into the terminal area. The primary algorithm incorporated in it is a real-time scheduler which generates efficient landing sequences and landing times for arrivals within about 200 n.m. of touchdown. A unique feature of the TMA is its graphical interface, which allows the traffic manager to modify the computer-generated schedules for specific aircraft while allowing the automatic scheduler to continue generating schedules for all other aircraft. The graphical interface also provides convenient methods for monitoring the traffic flow and changing scheduling parameters during real-time operation.

  16. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate together with a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075

  17. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    PubMed

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, with the mean AGV operation efficiency found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model were performed by simulation via Flexsim software.

  18. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization

    PubMed Central

    Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm’s flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs’ battery charge. Assessment of the numerical examples’ scheduling before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, with the mean AGV operation efficiency found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model were performed by simulation via Flexsim software. PMID:28263994

  19. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  20. A modified ant colony optimization for the grid jobs scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling with customers' quality of service (QoS) requirements is challenging in the grid environment. In this paper, we present a modified ant colony optimization (MACO) algorithm for the job scheduling problem in grids. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.

  1. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The proposed scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.

  2. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
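
    A minimal sketch of the per-neuron idea (the data layout is an assumption, not the authors' data structure): each target neuron keeps a circular buffer whose length equals the maximum delay, and a spike adds its synaptic weight into the slot that becomes current after its delay, so pending-event storage scales with the number of neurons times the maximum delay rather than with the number of synapses.

    ```python
    class DelayRingBuffer:
        """Per-neuron circular buffers for scheduled synaptic input.
        Memory is n_neurons * max_delay, independent of the number of synapses."""

        def __init__(self, n_neurons, max_delay):
            self.max_delay = max_delay
            self.t = 0
            self.buf = [[0.0] * max_delay for _ in range(n_neurons)]

        def schedule(self, target, weight, delay):
            """Deliver `weight` to `target` exactly `delay` steps from now."""
            assert 1 <= delay <= self.max_delay
            self.buf[target][(self.t + delay) % self.max_delay] += weight

        def advance(self):
            """Move to the next time step and return the inputs arriving then."""
            self.t += 1
            slot = self.t % self.max_delay
            due = [row[slot] for row in self.buf]
            for row in self.buf:
                row[slot] = 0.0          # free the slot for reuse
            return due

    rb = DelayRingBuffer(n_neurons=3, max_delay=4)
    rb.schedule(target=1, weight=0.5, delay=2)
    rb.schedule(target=2, weight=0.3, delay=1)
    print(rb.advance())   # [0.0, 0.0, 0.3]
    print(rb.advance())   # [0.0, 0.5, 0.0]
    ```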

  3. Temporal Traffic Dynamics Improve the Connectivity of Ad Hoc Cognitive Radio Networks

    DTIC Science & Technology

    2014-02-12

    more packets to send, and are (re)born when they do. We could also consider this from a duty-cycling perspective: nodes sleep and wake up ... transmitting and receiving activities in the primary network in an intricate way, we obtain the MMD by considering a flooding scheme that tries every ... consider the delay caused by scheduling, contention, or queuing. It can be shown that this flooding scheme gives us the MMD. We stress that flooding is used

  4. Distributed Topology Organization and Transmission Scheduling in Wireless Ad Hoc Networks

    DTIC Science & Technology

    2004-01-01

    slot consists of two mini-slots that support full-duplex transfer: one for master-to-slave transmission and the other for slave-to-master ... protocol storage requirement would only be 208 bits in this case. ... Communication requirements: each half-duplex mini-slot can be used by either a data or control packet; eq. (4.2) sets the minimum half-duplex mini-slot size in the system or, equivalently, the maximum system period

  5. GREEN + IDMaps: A practical solution for ensuring fairness in a biased internet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapadia, A. C.; Thulasidasan, S.; Feng, W. C.

    2002-01-01

    GREEN is a proactive queue-management (PQM) algorithm that removes TCP's bias against connections with longer round-trip times, while maintaining high link utilization and low packet loss. GREEN applies knowledge of the steady-state behavior of TCP connections to proactively drop packets, thus preventing congestion from ever occurring. As a result, GREEN ensures much higher fairness between flows than other active queue management schemes like Flow Random Early Drop (FRED) and Stochastic Fair Blue (SFB), which suffer in topologies where a large number of flows have widely varying round-trip times. GREEN's performance relies on its ability to gauge a flow's round-trip time (RTT). In previous work, we presented results for an ideal GREEN router which has accurate RTT information for each flow. In this paper, we present a practical solution based on IDMaps, an Internet distance-estimation service, and compare its performance to that of an ideal GREEN router. We show that a solution based on IDMaps is practical and maintains high fairness and link utilization, and low packet-loss rates.
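
    GREEN's use of TCP's steady-state behavior can be illustrated with the widely used Mathis throughput model, BW = MSS * c / (RTT * sqrt(p)); solving for the loss rate that caps a flow at its fair share gives a per-flow drop probability that is lower for longer-RTT flows, which is how the bias is removed. The sketch below illustrates that relation under assumed parameter names; it is not the GREEN implementation.

    ```python
    import math

    def green_drop_probability(mss_bytes, rtt_s, fair_share_bps, c=math.sqrt(1.5)):
        """Drop probability that, per the Mathis model
        BW = MSS * c / (RTT * sqrt(p)), limits a flow to its fair share."""
        bw_bytes = fair_share_bps / 8.0
        p = (mss_bytes * c / (rtt_s * bw_bytes)) ** 2
        return min(1.0, p)

    # A 10 Mb/s link shared by 4 flows -> 2.5 Mb/s fair share each.
    for rtt in (0.02, 0.1, 0.3):          # flows with different round-trip times
        print(rtt, round(green_drop_probability(1460, rtt, 2.5e6), 5))
    # Longer-RTT flows receive a lower drop probability, removing TCP's RTT bias.
    ```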

  6. Information Theory Filters for Wavelet Packet Coefficient Selection with Application to Corrosion Type Identification from Acoustic Emission Signals

    PubMed Central

    Van Dijck, Gert; Van Hulle, Marc M.

    2011-01-01

    The damage caused by corrosion in chemical process installations can lead to unexpected plant shutdowns and the leakage of potentially toxic chemicals into the environment. When subjected to corrosion, structural changes in the material occur, leading to energy releases as acoustic waves. This acoustic activity can in turn be used for corrosion monitoring, and even for predicting the type of corrosion. Here we apply wavelet packet decomposition to extract features from acoustic emission signals. We then use the extracted wavelet packet coefficients for distinguishing between the most important types of corrosion processes in the chemical process industry: uniform corrosion, pitting and stress corrosion cracking. The local discriminant basis selection algorithm can be considered as a standard for the selection of the most discriminative wavelet coefficients. However, it does not take the statistical dependencies between wavelet coefficients into account. We show that, when these dependencies are ignored, a lower accuracy is obtained in predicting the corrosion type. We compare several mutual information filters to take these dependencies into account in order to arrive at a more accurate prediction. PMID:22163921

  7. The Autism Diagnostic Observation Schedule, Module 4: Application of the Revised Algorithms in an Independent, Well-Defined, Dutch Sample (N = 93)

    ERIC Educational Resources Information Center

    de Bildt, Annelies; Sytema, Sjoerd; Meffert, Harma; Bastiaansen, Jojanneke A. C. J.

    2016-01-01

    This study examined the discriminative ability of the revised Autism Diagnostic Observation Schedule module 4 algorithm (Hus and Lord in "J Autism Dev Disord" 44(8):1996-2012, 2014) in 93 Dutch males with Autism Spectrum Disorder (ASD), schizophrenia, psychopathy or controls. Discriminative ability of the revised algorithm ASD cut-off…

  8. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a prospective glasses-free 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.

  9. Energy-Efficient Deadline-Aware Data-Gathering Scheme Using Multiple Mobile Data Collectors.

    PubMed

    Dasgupta, Rumpa; Yoon, Seokhoon

    2017-04-01

    In wireless sensor networks, the data collected by sensors are usually forwarded to the sink through multi-hop forwarding. However, multi-hop forwarding can be inefficient due to the energy hole problem and high communications overhead. Moreover, when the monitored area is large and the number of sensors is small, sensors cannot send the data via multi-hop forwarding due to the lack of network connectivity. In order to address those problems of multi-hop forwarding, in this paper, we consider a data collection scheme that uses mobile data collectors (MDCs), which visit sensors and collect data from them. Due to the recent breakthroughs in wireless power transfer technology, MDCs can also be used to recharge the sensors to keep them from draining their energy. In MDC-based data-gathering schemes, a big challenge is how to find the MDCs' traveling paths in a balanced way, such that their energy consumption is minimized and the packet-delay constraint is satisfied. Therefore, in this paper, we aim at finding the MDCs' paths, taking energy efficiency and delay constraints into account. We first define an optimization problem, named the delay-constrained energy minimization (DCEM) problem, to find the paths for MDCs. An integer linear programming problem is formulated to find the optimal solution. We also propose a two-phase path-selection algorithm to efficiently solve the DCEM problem. Simulations are performed to compare the performance of the proposed algorithms with two heuristic algorithms for the vehicle routing problem under various scenarios. The simulation results show that the proposed algorithms can outperform existing algorithms in terms of energy efficiency and packet delay.

  10. Energy-Efficient Deadline-Aware Data-Gathering Scheme Using Multiple Mobile Data Collectors

    PubMed Central

    Dasgupta, Rumpa; Yoon, Seokhoon

    2017-01-01

    In wireless sensor networks, the data collected by sensors are usually forwarded to the sink through multi-hop forwarding. However, multi-hop forwarding can be inefficient due to the energy hole problem and high communications overhead. Moreover, when the monitored area is large and the number of sensors is small, sensors cannot send the data via multi-hop forwarding due to the lack of network connectivity. In order to address those problems of multi-hop forwarding, in this paper, we consider a data collection scheme that uses mobile data collectors (MDCs), which visit sensors and collect data from them. Due to the recent breakthroughs in wireless power transfer technology, MDCs can also be used to recharge the sensors to keep them from draining their energy. In MDC-based data-gathering schemes, a big challenge is how to find the MDCs’ traveling paths in a balanced way, such that their energy consumption is minimized and the packet-delay constraint is satisfied. Therefore, in this paper, we aim at finding the MDCs’ paths, taking energy efficiency and delay constraints into account. We first define an optimization problem, named the delay-constrained energy minimization (DCEM) problem, to find the paths for MDCs. An integer linear programming problem is formulated to find the optimal solution. We also propose a two-phase path-selection algorithm to efficiently solve the DCEM problem. Simulations are performed to compare the performance of the proposed algorithms with two heuristic algorithms for the vehicle routing problem under various scenarios. The simulation results show that the proposed algorithms can outperform existing algorithms in terms of energy efficiency and packet delay. PMID:28368300

  11. Representations and evolutionary operators for the scheduling of pump operations in water distribution networks.

    PubMed

    López-Ibáñez, Manuel; Prasad, T Devi; Paechter, Ben

    2011-01-01

    Reducing the energy consumption of water distribution networks has never had more significance. The greatest energy savings can be obtained by carefully scheduling the operations of pumps. Schedules can be defined either implicitly, in terms of other elements of the network such as tank levels; or explicitly, by specifying the time during which each pump is on/off. The traditional representation of explicit schedules is a string of binary values with each bit representing pump on/off status during a particular time interval. In this paper, we formally define and analyze two new explicit representations based on time-controlled triggers, where the maximum number of pump switches is established beforehand and the schedule may contain fewer than the maximum number of switches. In these representations, a pump schedule is divided into a series of integers with each integer representing the number of hours for which a pump is active/inactive. This reduces the number of potential schedules compared to the binary representation, and allows the algorithm to operate on the feasible region of the search space. We propose evolutionary operators for these two new representations. The new representations and their corresponding operators are compared with the two most-used representations in pump scheduling, namely, binary representation and level-controlled triggers. A detailed statistical analysis of the results indicates which parameters have the greatest effect on the performance of evolutionary algorithms. The empirical results show that an evolutionary algorithm using the proposed representations is an improvement over the results obtained by a recent state-of-the-art hybrid genetic algorithm for pump scheduling using level-controlled triggers.
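
    The time-controlled trigger representation can be made concrete with a short sketch (a simplified reading of the encoding, not the paper's exact genotype): a schedule is a list of integer durations that alternate between off and on periods, and decoding it yields the hourly on/off string used by the traditional binary representation.

    ```python
    def decode_time_triggers(durations, horizon=24, start_on=False):
        """durations: integers, alternating hours off/on (starting with `start_on`).
        Returns a binary hourly schedule of length `horizon`. Using fewer switches
        than the allowed maximum simply leaves the pump in its last state."""
        schedule = []
        state = start_on
        for hours in durations:
            schedule.extend([int(state)] * hours)
            state = not state
        schedule.extend([int(state)] * (horizon - len(schedule)))  # pad last state
        return schedule[:horizon]

    # Off 6 h, on 4 h, off 8 h, then on for the remaining 6 h of the day.
    print(decode_time_triggers([6, 4, 8]))
    ```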

  12. Information-based management mode based on value network analysis for livestock enterprises

    NASA Astrophysics Data System (ADS)

    Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng

    2018-01-01

    With the development of computer and IT technologies, enterprise management has gradually become information-based. Moreover, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management. In addition, low levels of efficiency result in increasing production costs. This paper adopts 'struts2' to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification system by studying multiple-tag anti-collision via a dynamic grouping ALOHA algorithm. This algorithm is based on the existing ALOHA algorithm and uses improved dynamic grouping, which is characterised by a high throughput rate. The new algorithm can reach a throughput 42% higher than that of the general ALOHA algorithm, and the system throughput remains relatively stable as the number of tags changes.

  13. Random-access algorithms for multiuser computer communication networks. Doctoral thesis, 1 September 1986-31 August 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papantoni-Kazakos, P.; Paterakis, M.

    1988-07-01

    For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels. The noise manifests itself as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.

  14. Joint Cross-Layer Design for Wireless QoS Content Delivery

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Lv, Tiejun; Zheng, Haitao

    2005-12-01

    In this paper, we propose a joint cross-layer design for wireless quality-of-service (QoS) content delivery. Central to our proposed cross-layer design is the concept of adaptation. Adaptation represents the ability to adjust protocol stacks and applications in response to channel variations. We focus our cross-layer design especially on the application, media access control (MAC), and physical layers. The network is designed based on our proposed fast frequency-hopping orthogonal frequency division multiplexing (OFDM) technique. We also propose a QoS-aware scheduler and a power adaptation transmission scheme operating at both the base station and the mobile side. The proposed MAC scheduler coordinates the transmissions of an IP base station and mobile nodes. The scheduler also selects appropriate transmission formats and packet priorities for individual users based on current channel conditions and the users' QoS requirements. The test results show that our cross-layer design provides an excellent framework for wireless QoS content delivery.

  15. Spacelab system analysis: The modified free access protocol: An access protocol for communication systems with periodic and Poisson traffic

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Owens, John; Daniel, Steven

    1989-01-01

    The protocol definition and terminal hardware for the modified free access protocol, a communications protocol similar to Ethernet, are developed. An MFA protocol simulator and a CSMA/CD math model are also developed. The protocol is tailored to communication systems where the total traffic may be divided into scheduled traffic and Poisson traffic. The scheduled traffic should occur on a periodic basis but may occur after a given event such as a request for data from a large number of stations. The Poisson traffic will include alarms and other random traffic. The purpose of the protocol is to guarantee that scheduled packets will be delivered without collision. This is required in many control and data collection systems. The protocol uses standard Ethernet hardware and software, requiring minimal modifications to an existing system. The modification to the protocol only affects the Ethernet transmission privileges and does not affect the Ethernet receiver.

  16. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    NASA Astrophysics Data System (ADS)

    Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.

    2016-02-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to the unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool-life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system and runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In a real situation, a cutting tool can fail before its tool life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule, the second step determines the cutting tool failure time, the third step determines the system status at the cutting tool failure time, and the fourth step reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules differ from the initial schedule in the starting and completion times of each operation.

  17. Routing and Addressing Problems in Large Metropolitan-Scale Internetworks. ISI Research Report.

    ERIC Educational Resources Information Center

    Finn, Gregory G.

    This report discusses some of the problems and limitations in existing internetwork design for the connection of packet-switching networks of different technologies and presents an algorithm that has been shown to be suitable for internetworks of unbounded size. Using a new form of address and a flat routing mechanism called Cartesian routing,…

  18. Decentralized Control of Scheduling in Distributed Systems.

    DTIC Science & Technology

    1983-03-18

    the job scheduling algorithm adapts to the changing busyness of the various hosts in the system. The environment in which the job scheduling entities ... resources and processes that constitute the node and a set of interfaces for accessing these processes and resources. The structure of a node could change ... parallel. Chang [CHNG82] has also described some algorithms for detecting properties of general graphs by traversing paths in a graph in parallel. One of

  19. 40-Gbps optical backbone network deep packet inspection based on FPGA

    NASA Astrophysics Data System (ADS)

    Zuo, Yuan; Huang, Zhiping; Su, Shaojing

    2014-11-01

    In the information era, big data brings problems such as high-speed transmission, storage, and real-time analysis and processing. As the principal medium for data transmission, the Internet is an important part of big data processing research. With the large-scale use of the Internet, network data streams are increasing rapidly. Line rates on main fiber-optic links have reached 40 Gbps and even 100 Gbps, so data on the optical backbone network exhibits the characteristics of massive data. Generally, data services are provided via IP packets on the optical backbone network, which is built on SDH (Synchronous Digital Hierarchy); the method of mapping IP packets directly into the SDH payload is called POS (Packet over SDH). To address real-time processing of high-speed massive data, this paper designs an ATCA-based processing platform for 40 Gbps POS data-stream recognition and packet content capture, with an FPGA as the central processing element. The platform efficiently performs pre-processing for clustering algorithms, service traffic identification, and data mining ahead of subsequent big data storage and analysis. The operational procedure is also described. Four channels of 10 Gbps POS signals, decomposed by the FPGA-based analysis module, are fed to the flow classification module and a TCAM-based pattern matching component. Based on payload length and flow properties, buffer management is added to the platform to retain key flow information. After data-stream analysis, DPI (deep packet inspection) and load-balanced distribution, the traffic is forwarded to the back-end machine through the Gigabit Ethernet ports on the back board. Practice shows that the proposed platform is superior to traditional designs based on ASICs and network processors.

  20. An Improved Scheduling Algorithm for Data Transmission in Ultrasonic Phased Arrays with Multi-Group Ultrasonic Sensors

    PubMed Central

    Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji

    2017-01-01

    High data transmission efficiency is a key requirement for an ultrasonic phased array with multiple groups of ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data transmission efficiency in hardware. The algorithm uses FIFOs as caches for the ultrasonic scanning data obtained from the sensors, with the output data read in a bandwidth-sharing way; on this basis, an optimal length ratio among all the FIFOs is derived, allowing read operations to be switched among the FIFOs without waiting for time slots. This algorithm therefore enhances the utilization of the read-bandwidth resources and obtains higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm are substantiated by its implementation in field programmable gate array (FPGA) technology, and the bandwidth utilization ratio and the real-time performance of the ultrasonic phased array are enhanced. PMID:29035345
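
    The core idea, FIFO depths proportional to each sensor group's data rate so that a round-robin reader never has to wait, can be sketched behaviorally in a few lines (a Python illustration under assumed names and rates, not the FPGA implementation).

    ```python
    def fifo_depths(write_rates, total_depth):
        """Split a fixed amount of buffer memory among FIFOs in proportion to the
        write rate of each sensor group (the paper derives its optimal ratio in
        hardware terms; a proportional split is the simplified reading)."""
        total_rate = sum(write_rates)
        return [max(1, round(total_depth * r / total_rate)) for r in write_rates]

    def round_robin_read(fifos, reads_per_visit=4):
        """One pass of the reader: service each FIFO in turn without waiting."""
        out = []
        for q in fifos:
            for _ in range(min(reads_per_visit, len(q))):
                out.append(q.pop(0))
        return out

    print(fifo_depths([10, 20, 40], total_depth=256))   # [37, 73, 146]
    fifos = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]]
    print(round_robin_read(fifos))                      # [1, 2, 3, 4, 5, 6, 7, 8, 9]
    ```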

  1. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balman, Mehmet; Kosar, Tevfik

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next-generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high-performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
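
    As a hedged illustration of advance reservation (the slot granularity and data structures are assumptions, not the paper's scheduler), the sketch below keeps a per-slot ledger of committed bandwidth and admits a transfer into the earliest contiguous window with enough residual bandwidth to finish before the user's deadline.

    ```python
    import math

    def reserve(ledger, capacity, size_bits, rate_bits, deadline_slot, slot_s=60):
        """ledger[t] = bandwidth already committed in slot t (bits/s).
        Reserve `rate_bits` per slot until `size_bits` are moved, starting at the
        earliest slot such that the transfer ends by `deadline_slot`; returns the
        (start, end) slots or None if the request cannot be confirmed."""
        slots_needed = math.ceil(size_bits / (rate_bits * slot_s))
        for start in range(0, deadline_slot - slots_needed + 1):
            window = range(start, start + slots_needed)
            if all(ledger.get(t, 0) + rate_bits <= capacity for t in window):
                for t in window:
                    ledger[t] = ledger.get(t, 0) + rate_bits
                return start, start + slots_needed - 1
        return None

    ledger = {}
    cap = 10e9                                   # 10 Gb/s link
    print(reserve(ledger, cap, size_bits=3e12, rate_bits=5e9, deadline_slot=20))
    print(reserve(ledger, cap, size_bits=3e12, rate_bits=8e9, deadline_slot=20))
    ```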

  2. Complexity of line-seru conversion for different scheduling rules and two improved exact algorithms for the multi-objective optimization.

    PubMed

    Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei

    2016-01-01

    Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule in seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexity of line-seru conversion for the ten different scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting for the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.

  3. Delivery of video-on-demand services using local storages within passive optical networks.

    PubMed

    Abeywickrama, Sandu; Wong, Elaine

    2013-01-28

    At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.

  4. Noise reduction algorithm with the soft thresholding based on the Shannon entropy and bone-conduction speech cross-correlation bands.

    PubMed

    Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam

    2018-01-01

    The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on the suppression of noise or non-speech components of the target air-conduction signals. However, air-conducted speech is hard to differentiate from babble or white noise signals. To overcome this problem, the proposed algorithm uses bone-conduction speech signals and soft thresholding based on the Shannon entropy principle and the cross-correlation of air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed, which makes use of the Shannon entropy principle and cross-correlation with the bone-conduction speech signals to threshold the wavelet packet coefficients of the noisy speech. Each threshold is generated by the entropy and cross-correlation approaches in the decomposed bands obtained by wavelet packet decomposition. The proposed method achieves good results under objective quality measures, namely PESQ, RMSE, correlation, and SNR, and its noise reduction was evaluated in a MATLAB simulation. To verify the method's feasibility, we compared the air- and bone-conduction speech signals and their spectra processed by the proposed method. As a result, the high performance of the proposed method is confirmed, which makes it quite instrumental for future applications in communication devices, noisy environments, construction, and military operations.
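
    The per-band soft-thresholding step can be illustrated with the short sketch below; it is an interpretation rather than the authors' code, and the way the threshold is weighted by the band entropy and the air/bone cross-correlation is an assumption, as are the function names and toy data.

        # Illustrative soft thresholding of one band of wavelet-packet coefficients.
        import numpy as np

        def shannon_entropy(x, bins=32):
            # Histogram-based Shannon entropy of the coefficient magnitudes.
            p, _ = np.histogram(np.abs(x), bins=bins)
            p = p[p > 0].astype(float)
            p = p / p.sum()
            return -np.sum(p * np.log2(p))

        def soft_threshold(coeffs, thr):
            # Classic soft thresholding: shrink magnitudes toward zero by 'thr'.
            return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

        def denoise_band(air_band, bone_band, base_thr=0.1):
            # Assumed weighting: bands whose air- and bone-conduction coefficients
            # correlate strongly look like speech and are shrunk less; high-entropy,
            # uncorrelated bands are thresholded harder.
            corr = abs(np.corrcoef(air_band, bone_band)[0, 1])
            weight = (1.0 - corr) * shannon_entropy(air_band) / 5.0
            return soft_threshold(air_band, base_thr * (1.0 + weight))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            clean = np.sin(np.linspace(0, 20 * np.pi, 512))
            air = clean + 0.3 * rng.standard_normal(512)    # noisy air-conduction band
            bone = clean + 0.1 * rng.standard_normal(512)   # cleaner bone-conduction band
            print(denoise_band(air, bone)[:5])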

  5. Rate-based congestion control in networks with smart links, revision. B.S. Thesis - May 1988

    NASA Technical Reports Server (NTRS)

    Heybey, Andrew Tyrrell

    1990-01-01

    The author uses a network simulator to explore rate-based congestion control in networks with smart links that can feed back information to tell senders to adjust their transmission rates. This method differs in a very important way from congestion control in which a congested network component just drops packets - the most commonly used method. It is clearly advantageous for the links in the network to communicate with the end users about the network capacity, rather than the users unilaterally picking a transmission rate. The components in the middle of the network, not the end users, have information about the capacity and traffic in the network. The author experiments with three different algorithms for calculating the control rate to feed back to the users. All of the algorithms exhibit problems in the form of large queues when simulated with a configuration modeling the dynamics of a packet-voice system. However, the problems are not with the algorithms themselves, but with the fact that feedback takes time. If the network steady-state utilization is low enough that it can absorb transients in the traffic through it, then the large queues disappear. If the users are modified to start sending slowly, to allow the network to adapt to a new flow without causing congestion, a greater portion of the network's bandwidth can be used.
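
    A toy version of link-computed rate feedback (an illustration of the general idea, not the thesis algorithms) is sketched below: the link advertises an equal share of its capacity to its active senders, and each sender ramps toward the advertised rate instead of picking a rate unilaterally. The gain and starting rates are assumptions.

        # Toy rate-feedback loop: the link advertises capacity / active_flows, and
        # senders move a fraction of the way toward that value each round.
        def link_feedback(capacity, sender_rates):
            active = sum(1 for r in sender_rates if r > 0) or 1
            return capacity / active

        def step(sender_rates, advertised, gain=0.5):
            return [r + gain * (advertised - r) for r in sender_rates]

        if __name__ == "__main__":
            capacity, rates = 10.0, [0.5, 0.5, 0.5, 0.5]   # four flows starting slowly
            for _ in range(6):
                rates = step(rates, link_feedback(capacity, rates))
            print([round(r, 2) for r in rates])            # converges toward 2.5 each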

  6. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    PubMed

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.

  7. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    PubMed Central

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
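
    The two records above do not spell out the SPTA rule itself; as a hedged illustration of the shortest-processing-time idea with release dates, the sketch below dispatches, at each decision point, the released job with the smallest processing time, shown on a single machine for simplicity rather than on the no-wait flow shop studied in the papers.

        # SPT-with-release-dates dispatch on one machine (illustrative only; the
        # papers' SPTA algorithms and the no-wait flow-shop routing are not shown).
        def spt_release_schedule(jobs):
            # jobs: list of (release_date, processing_time). Returns completion times.
            remaining = sorted(jobs)
            t, completions = 0, []
            while remaining:
                ready = [j for j in remaining if j[0] <= t]
                if not ready:                         # idle until the next release
                    t = min(j[0] for j in remaining)
                    continue
                job = min(ready, key=lambda j: j[1])  # shortest released job first
                remaining.remove(job)
                t += job[1]
                completions.append(t)
            return completions

        if __name__ == "__main__":
            print(spt_release_schedule([(0, 5), (1, 2), (2, 1), (10, 3)]))  # [5, 6, 8, 13]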

  8. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical: quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on two matrices obtained from the precedence relationships between tasks and on the data derived from these matrices. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.

  9. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured considering only a single objective for optimization. Therefore, considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, total tardiness, etc. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.

  10. A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    PubMed Central

    Gil, Joon-Min; Han, Youn-Hee

    2011-01-01

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387

  11. A discrete artificial bee colony algorithm incorporating differential evolution for the flow-shop scheduling problem with blocking

    NASA Astrophysics Data System (ADS)

    Han, Yu-Yan; Gong, Dunwei; Sun, Xiaoyan

    2015-07-01

    A flow-shop scheduling problem with blocking has important applications in a variety of industrial systems but is underrepresented in the research literature. In this study, a novel discrete artificial bee colony (ABC) algorithm is presented to solve the above scheduling problem with a makespan criterion by incorporating the ABC with differential evolution (DE). The proposed algorithm (DE-ABC) contains three key operators. One is related to the employed bee operator (i.e. adopting mutation and crossover operators of discrete DE to generate solutions with good quality); the second is concerned with the onlooker bee operator, which modifies the selected solutions using insert or swap operators based on the self-adaptive strategy; and the last is for the local search, that is, the insert-neighbourhood-based local search with a small probability is adopted to improve the algorithm's capability in exploitation. The performance of the proposed DE-ABC algorithm is empirically evaluated by applying it to well-known benchmark problems. The experimental results show that the proposed algorithm is superior to the compared algorithms in minimizing the makespan criterion.

  12. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    PubMed

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is a production scheduling problem that belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO, and then each subpopulation is updated by using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.

  13. Autonomous Hybrid Priority Queueing for Scheduling Residential Energy Demands

    NASA Astrophysics Data System (ADS)

    Kalimullah, I. Q.; Shamroukh, M.; Sahar, N.; Shetty, S.

    2017-05-01

    The advent of smart grid technologies has opened up opportunities to manage the energy consumption of users within a residential smart grid system. Demand response management in particular is being employed to reduce the overall load on an electricity network, which could in turn reduce outages and electricity costs. The objective of this paper is to develop an intelligent scheduler that optimizes the energy available to a micro grid through a hybrid queueing algorithm centered on the consumers' energy demands. This is achieved by shifting certain schedulable load appliances to light load hours. Various factors such as the type of demand, grid load, and consumers' energy usage patterns and preferences are considered while formulating the logical constraints required for the algorithm. The algorithm thus obtained is then implemented in the MATLAB workspace to simulate its execution by an Energy Consumption Scheduler (ECS) found within smart meters, which automatically finds the optimal energy consumption schedule tailor-made to fit each consumer within the micro grid network.

  14. Design and implementation of priority and time-window based traffic scheduling and routing-spectrum allocation mechanism in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan

    2016-02-01

    With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are of great significance for increasing network efficiency. In this paper, an adaptive traffic scheduling policy based on priority and a time window is proposed, and the performance of this algorithm is evaluated in terms of scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm, which we propose. A fairness index is introduced to improve the spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and provides real-time and efficient responses. The node spectrum configuration scheme improves frequency resource utilization and the overall efficiency of the network.
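
    The routing step named above is the Floyd (Floyd-Warshall) all-pairs shortest path algorithm; a standard self-contained version is sketched below for reference, with an illustrative four-node topology. The paper's greedy spectrum-allocation model and fairness index are not reproduced here.

        # Standard Floyd-Warshall all-pairs shortest paths (routing step only).
        INF = float("inf")

        def floyd_warshall(weights):
            # weights[i][j]: direct link cost, INF if no link, 0 on the diagonal.
            n = len(weights)
            dist = [row[:] for row in weights]
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if dist[i][k] + dist[k][j] < dist[i][j]:
                            dist[i][j] = dist[i][k] + dist[k][j]
            return dist

        if __name__ == "__main__":
            w = [[0,   3,   INF, 8],
                 [3,   0,   2,   INF],
                 [INF, 2,   0,   2],
                 [8,   INF, 2,   0]]
            print(floyd_warshall(w))   # shortest 0 -> 3 becomes 3 + 2 + 2 = 7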

  15. Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates. The total weighted tardiness is used to evaluate the schedule's quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop an approximate or heuristic algorithm to obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are then improved by an exchange operation. Results are presented for generated instances.
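
    The ATC index used to seed the construction heuristic is, in its usual form, the Apparent Tardiness Cost rule; the sketch below computes it under that standard definition. The look-ahead parameter k and the job data are illustrative assumptions.

        # Apparent Tardiness Cost (ATC) dispatching index; higher = schedule sooner.
        import math

        def atc_index(weight, proc, due, now, k, p_bar):
            slack = max(due - proc - now, 0.0)
            return (weight / proc) * math.exp(-slack / (k * p_bar))

        def atc_order(jobs, now=0.0, k=2.0):
            # jobs: list of (weight, processing_time, due_date).
            p_bar = sum(p for _, p, _ in jobs) / len(jobs)
            return sorted(jobs, key=lambda j: -atc_index(j[0], j[1], j[2], now, k, p_bar))

        if __name__ == "__main__":
            jobs = [(1.0, 4.0, 10.0), (2.0, 3.0, 6.0), (1.0, 2.0, 20.0)]
            print(atc_order(jobs))   # tight, heavily weighted jobs float to the front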

  16. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for the TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through the iterated computation of solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  17. Active Solution Space and Search on Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo

    In this paper we propose a new search method based on a Genetic Algorithm for the Job-shop Scheduling Problem (JSP). The coding method represents job numbers in order to decide the priority with which jobs are placed on the Gantt chart (called the ordinal representation with a priority), and an active schedule is created by using left shifts. We first define an active solution: a solution from which an active schedule can be created without using left shifts; the set of such solutions is defined as the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while the solution is being evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.

  18. Longest jobs first algorithm in solving job shop scheduling using adaptive genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Alizadeh Sahzabi, Vahid; Karimi, Iman; Alizadeh Sahzabi, Navid; Mamaani Barnaghi, Peiman

    2012-01-01

    In this paper, a genetic algorithm was used to solve job shop scheduling problems. One example of a JSSP (Job Shop Scheduling Problem) is discussed, and it is described how such problems can be solved by a genetic algorithm. The goal in the JSSP is to attain the shortest processing time. Furthermore, a method is proposed to obtain the best performance in completing all jobs in the shortest time. The method is based mainly on a genetic algorithm (GA), and the crossover between parents always follows the rule that the longest process comes first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest, i.e. "longest job first": first, identify the machine that accumulates the most processing time while performing all of its jobs (the bottleneck); second, sort the jobs belonging to that machine in descending order. Based on the achieved results, "longest jobs first" is the optimal setting in job shop scheduling problems. In our results, the accuracy grows up to 94.7% for total processing time, and the method improved the accuracy of performing all jobs by 4% in the presented example.

  19. Fiber-wireless for smart grid: A survey

    NASA Astrophysics Data System (ADS)

    Radzi, NAM; Ridwan, MA; Din, NM; Abdullah, F.; Mustafa, IS; l-Mansoori, MH

    2017-11-01

    Smart grid allows two-way communication between power utility companies and their customers while having the ability to sense conditions along the transmission lines. The downside, however, is that when many smart devices transmit data simultaneously, network congestion results. A fiber-wireless (FiWi) network is one of the best congestion solutions for the smart grid to date. In this paper, current literature on FiWi for the smart grid is surveyed and a testbed for testing FiWi protocols and algorithms in the smart grid is proposed. The testbed results for the number of packets received and for delay versus packets transmitted are compared with the results obtained via simulation; they are in line with each other, validating the accuracy of the testbed.

  20. Scheduling Results for the THEMIS Observation Scheduling Tool

    NASA Technical Reports Server (NTRS)

    Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip

    2011-01-01

    We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument, onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potential thousands of observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
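
    The grid-based mapping reduces polygon overlap to a classic set covering problem; a minimal greedy set-cover sketch (not the THEMIS tool itself, and ignoring the operations constraints) is shown below with hypothetical observation footprints.

        # Minimal greedy set cover: repeatedly pick the candidate observation whose
        # footprint covers the most not-yet-covered grid cells. Illustrative only.
        def greedy_cover(universe, candidates):
            # candidates: dict of observation name -> set of grid cells it covers.
            uncovered, chosen = set(universe), []
            while uncovered:
                best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
                gain = candidates[best] & uncovered
                if not gain:
                    break                        # remaining cells cannot be covered
                chosen.append(best)
                uncovered -= gain
            return chosen, uncovered

        if __name__ == "__main__":
            cells = range(1, 8)
            obs = {"A": {1, 2, 3}, "B": {3, 4, 5, 6}, "C": {6, 7}, "D": {1, 5}}
            print(greedy_cover(cells, obs))      # (['B', 'A', 'C'], set())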

  1. The LSST OCS scheduler design

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Schumacher, German

    2014-08-01

    The Large Synoptic Survey Telescope (LSST) is a complex system of systems with demanding performance and operational requirements. The nature of its scientific goals requires a special Observatory Control System (OCS) and particularly a very specialized automatic Scheduler. The OCS Scheduler is an autonomous software component that drives the survey, selecting the detailed sequence of visits in real time, taking into account multiple science programs, the current external and internal conditions, and the history of observations. We have developed a SysML model for the OCS Scheduler that fits coherently in the OCS and LSST integrated model. We have also developed a prototype of the Scheduler that implements the scheduling algorithms in the simulation environment provided by the Operations Simulator, where the environment and the observatory are modeled with real weather data and detailed kinematics parameters. This paper expands on the Scheduler architecture and the proposed algorithms to achieve the survey goals.

  2. Hybrid Particle Swarm Optimization for Hybrid Flowshop Scheduling Problem with Maintenance Activities

    PubMed Central

    Li, Jun-qing; Pan, Quan-ke; Mao, Kun

    2014-01-01

    A hybrid algorithm which combines particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover and mutation operators are investigated. In addition, an efficient multiple-insert mutation operator is developed to enhance the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded in the algorithm to improve its exploitation ability. The parameters of the canonical PSO are tuned in detailed experiments. The proposed algorithm is tested on variations of 77 of Carlier and Néron's benchmark problems. Detailed comparisons with the present efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm. PMID:24883414

  3. PWFQ: a priority-based weighted fair queueing algorithm for the downstream transmission of EPON

    NASA Astrophysics Data System (ADS)

    Xu, Sunjuan; Ye, Jiajun; Zou, Junni

    2005-11-01

    In the downstream direction of EPON, all ethernet frames share one downlink channel from the OLT to destination ONUs. To guarantee differentiated services, a scheduling algorithm is needed to solve the link-sharing issue. In this paper, we first review the classical WFQ algorithm and point out the shortcomings existing in the fair queueing principle of WFQ algorithm for EPON. Then we propose a novel scheduling algorithm called Priority-based WFQ (PWFQ) algorithm which distributes bandwidth based on priority. PWFQ algorithm can guarantee the quality of real-time services whether under light load or under heavy load. Simulation results also show that PWFQ algorithm not only can improve delay performance of real-time services, but can also meet the worst-case delay bound requirements.
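
    For reference, the classical WFQ bookkeeping that PWFQ builds on assigns each arriving packet a virtual finish time F = max(V, F_prev) + L/w and always serves the smallest F first. The sketch below shows that baseline with weights derived from a priority table, which is only an assumed stand-in for PWFQ's actual priority-to-bandwidth mapping.

        # Baseline weighted fair queueing finish-time bookkeeping (illustrative;
        # the priority-to-weight table is an assumption, not PWFQ's exact rule).
        import heapq

        class WFQ:
            def __init__(self):
                self.virtual_time = 0.0
                self.last_finish = {}      # flow -> finish time of its previous packet
                self.queue = []            # heap of (finish_time, seq, flow, length)
                self.seq = 0

            def enqueue(self, flow, length, weight):
                start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
                finish = start + length / weight
                self.last_finish[flow] = finish
                heapq.heappush(self.queue, (finish, self.seq, flow, length))
                self.seq += 1

            def dequeue(self):
                finish, _, flow, length = heapq.heappop(self.queue)
                self.virtual_time = finish         # simplified virtual-clock update
                return flow, length

        if __name__ == "__main__":
            weight_of_priority = {0: 4.0, 1: 2.0, 2: 1.0}   # assumed mapping
            q = WFQ()
            q.enqueue("video", 1500, weight_of_priority[0])
            q.enqueue("web", 1500, weight_of_priority[2])
            q.enqueue("voice", 200, weight_of_priority[0])
            print([q.dequeue()[0] for _ in range(3)])        # ['voice', 'video', 'web']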

  4. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or to be maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the perceptual features of speech by applying wavelet packet decomposition to each speech component, and the LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating hash values through feature-matrix quantization using the median value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms and is able to resist attacks from common background noise. Also, the algorithm is computationally efficient and is able to meet the real-time requirements of speech communication and complete speech authentication quickly.

  5. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, in order to win the initiative in the fierce competition of the market, a key step is to improve their R&D ability to meet the various demands of customers in a more timely and less costly way. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a certain period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. Simultaneously, the feasibility of the model and the validity of the algorithm are proved in the experiment.

  6. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.

  7. A ray tracing model of gravity wave propagation and breakdown in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Schoeberl, M. R.

    1985-01-01

    Gravity wave ray tracing and wave packet theory is used to parameterize wave breaking in the mesosphere. Rays are tracked by solving the group velocity equations, and the interaction with the basic state is determined by considering the evolution of the packet wave action density. The ray tracing approach has a number of advantages over the steady state parameterization as the effects of gravity wave focussing and refraction, local dissipation, and wave response to rapid changes in the mean flow are more realistically considered; however, if steady state conditions prevail, the method gives identical results. The ray tracing algorithm is tested using both interactive and noninteractive models of the basic state. In the interactive model, gravity wave interaction with the polar night jet on a beta-plane is considered. The algorithm produces realistic polar night jet closure for weak topographic forcing of gravity waves. Planetary scale waves forced by local transfer of wave action into the basic flow in turn transfer their wave action into the zonal mean flow. Highly refracted rays are also found not to contribute greatly to the climatology of the mesosphere, as their wave action is severely reduced by dissipation during their lateral travel.

  8. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic but which retain high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple point communication system carrying fixed length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are done following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.

  9. Research on schedulers for astronomical observatories

    NASA Astrophysics Data System (ADS)

    Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian

    2012-09-01

    The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources while minimizing the total execution time. A problem of higher complexity, called the Flexible JSP (FJSP), arises when the tasks can be executed by different resources, i.e. by different telescopes; it focuses on determining a routing policy (i.e., which machine to assign to each operation) in addition to the traditional scheduling decisions (i.e., determining the starting time of each operation). In most cases there is no single best approach for the planning system and, therefore, various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient in determining the best approach. A short-term scheduler, for instance, has to find a good solution with minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environment conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO) where the conclusions of this analysis are applied to develop a suitable routine.

  10. A hybrid dynamic harmony search algorithm for identical parallel machines scheduling

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Pan, Quan-Ke; Wang, Ling; Li, Jun-Qing

    2012-02-01

    In this article, a dynamic harmony search (DHS) algorithm is proposed for the identical parallel machines scheduling problem with the objective to minimize makespan. First, an encoding scheme based on a list scheduling rule is developed to convert the continuous harmony vectors to discrete job assignments. Second, the whole harmony memory (HM) is divided into multiple small-sized sub-HMs, and each sub-HM performs evolution independently and exchanges information with others periodically by using a regrouping schedule. Third, a novel improvisation process is applied to generate a new harmony by making use of the information of harmony vectors in each sub-HM. Moreover, a local search strategy is presented and incorporated into the DHS algorithm to find promising solutions. Simulation results show that the hybrid DHS (DHS_LS) is very competitive in comparison to its competitors in terms of mean performance and average computational time.

  11. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: the mission payloads program and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  12. A Scheduling Algorithm for the Distributed Student Registration System in Transaction-Intensive Environment

    ERIC Educational Resources Information Center

    Li, Wenhao

    2011-01-01

    Distributed workflow technology has been widely used in modern education and e-business systems. Distributed web applications have shown cross-domain and cooperative characteristics to meet the need of current distributed workflow applications. In this paper, the author proposes a dynamic and adaptive scheduling algorithm PCSA (Pre-Calculated…

  13. Algorithms for Scheduling and Network Problems

    DTIC Science & Technology

    1991-09-01

    We already know, by Lemma 2.2.1, that WOPT = O(log(mpU)), so if we could solve this integer program optimally we would be done.

  14. Electromagnetic interference-aware transmission scheduling and power control for dynamic wireless access in hospital environments.

    PubMed

    Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio

    2011-11-01

    We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual objective optimization problem which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both the algorithms.

  15. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology.

    PubMed

    Rana, Md Masud

    2017-01-01

    This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
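
    The least mean square fourth estimator (usually called least mean fourth, LMF) differs from LMS only in that the error enters the weight update cubed; a generic sketch under that standard definition is given below on a toy linear model, since the paper's microgrid state-space model and packet-dropout handling are not reproduced here.

        # Generic least-mean-fourth (LMF) update: w <- w + mu * e^3 * x (toy model).
        import numpy as np

        def lmf_estimate(X, d, mu=1e-3, iters=20000, seed=0):
            rng = np.random.default_rng(seed)
            w = np.zeros(X.shape[1])
            for _ in range(iters):
                i = rng.integers(len(d))
                e = d[i] - X[i] @ w             # estimation error for one measurement
                w = w + mu * (e ** 3) * X[i]    # fourth-order cost gives a cubic error term
            return w

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            true_w = np.array([0.8, -0.3])
            X = rng.standard_normal((500, 2))
            d = X @ true_w + 0.01 * rng.standard_normal(500)
            print(np.round(lmf_estimate(X, d), 2))   # approaches [0.8, -0.3]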

  16. Variable Scheduling to Mitigate Channel Losses in Energy-Efficient Body Area Networks

    PubMed Central

    Tselishchev, Yuriy; Boulis, Athanassios; Libman, Lavy

    2012-01-01

    We consider a typical body area network (BAN) setting in which sensor nodes send data to a common hub regularly on a TDMA basis, as defined by the emerging IEEE 802.15.6 BAN standard. To reduce transmission losses caused by the highly dynamic nature of the wireless channel around the human body, we explore variable TDMA scheduling techniques that allow the order of transmissions within each TDMA round to be decided on the fly, rather than being fixed in advance. Using a simple Markov model of the wireless links, we devise a number of scheduling algorithms that can be performed by the hub, which aim to maximize the expected number of successful transmissions in a TDMA round, and thereby significantly reduce transmission losses as compared with a static TDMA schedule. Importantly, these algorithms do not require a priori knowledge of the statistical properties of the wireless channels, and the reliability improvement is achieved entirely via shuffling the order of transmissions among devices, and does not involve any additional energy consumption (e.g., retransmissions). We evaluate these algorithms directly on an experimental set of traces obtained from devices strapped to human subjects performing regular daily activities, and confirm that the benefits of the proposed variable scheduling algorithms extend to this practical setup as well. PMID:23202183
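
    One plausible reading of the flexible-ordering idea (not the paper's exact algorithms) is sketched below: each link is modelled as a two-state good/bad Markov chain, and at every slot the hub serves the pending node whose link is most likely to be good, propagating the other beliefs forward one step. The transition probabilities and node names are assumptions.

        # Illustrative variable TDMA ordering under a two-state Markov channel model.
        def propagate(p_good, p_gg, p_bg):
            # One-step prediction of P(link is good) from the transition probabilities.
            return p_good * p_gg + (1.0 - p_good) * p_bg

        def variable_tdma_round(links):
            # links: dict node -> (initial P(good), P(good|good), P(good|bad)).
            belief = {n: p for n, (p, _, _) in links.items()}
            order, pending = [], set(links)
            while pending:
                nxt = max(pending, key=lambda n: belief[n])   # best expected link first
                order.append(nxt)
                pending.remove(nxt)
                for n in pending:                             # other channels keep evolving
                    belief[n] = propagate(belief[n], links[n][1], links[n][2])
            return order

        if __name__ == "__main__":
            links = {"ecg": (0.9, 0.95, 0.3), "spo2": (0.4, 0.8, 0.2), "accel": (0.7, 0.9, 0.4)}
            print(variable_tdma_round(links))   # ['ecg', 'accel', 'spo2']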

  17. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve an optimization goal such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and many model variants exist; numerous combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of the RCPSP. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, which solve it more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; generally, the parameters are chosen empirically, which cannot guarantee that they are optimal. In this paper, the blind selection of parameters in the process of solving the RCPSP is addressed through sampling analysis, the establishment of a surrogate model, and finally the determination of the optimal parameters.

  18. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.

  19. Measuring the effects of heterogeneity on distributed systems

    NASA Technical Reports Server (NTRS)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis of such systems assumes homogeneity. This assumption of homogeneity has been driven mainly by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results, which indicate that random scheduling may be as good as a more complex scheduler, the scheduling algorithm studied here is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.

  20. SPORT: An Algorithm for Divisible Load Scheduling with Result Collection on Heterogeneous Systems

    NASA Astrophysics Data System (ADS)

    Ghatpande, Abhay; Nakazato, Hidenori; Beaumont, Olivier; Watanabe, Hiroshi

    Divisible Load Theory (DLT) is an established mathematical framework to study Divisible Load Scheduling (DLS). However, traditional DLT does not address the scheduling of results back to the source (i.e., result collection), nor does it comprehensively deal with system heterogeneity. In this paper, the DLSRCHETS (DLS with Result Collection on HETerogeneous Systems) problem is addressed. The few papers to date that have dealt with DLSRCHETS proposed simplistic LIFO (Last In, First Out) and FIFO (First In, First Out) types of schedules as solutions. In this paper, a new polynomial-time heuristic algorithm, SPORT (System Parameters based Optimized Result Transfer), is proposed as a solution to the DLSRCHETS problem. With the help of simulations, it is shown that the performance of SPORT is significantly better than that of existing algorithms. The other major contributions of this paper include, for the first time ever, (a) the derivation of the condition to identify the presence of idle time in a FIFO schedule for two processors, (b) the identification of the limiting condition for the optimality of FIFO and LIFO schedules for two processors, and (c) the introduction of the concept of an equivalent processor in DLS for heterogeneous systems with result collection.

  1. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
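
    The WSPT ordering at the heart of the asymptotic result simply sorts jobs by nondecreasing ratio of processing time to weight; the sketch below applies that ordering (using each job's total processing time across machines, one common choice) and evaluates the total weighted quadratic completion time of the resulting permutation with the standard flow-shop recursion. Data are illustrative.

        # WSPT ordering and the total weighted quadratic completion-time objective
        # on a permutation flow shop (illustrative sketch, not the paper's code).
        def wspt_order(jobs):
            # jobs: list of (weight, per-machine processing times).
            return sorted(range(len(jobs)), key=lambda j: sum(jobs[j][1]) / jobs[j][0])

        def weighted_quadratic_completion(jobs, order, machines):
            comp = [0.0] * machines                 # completion time on each machine
            total = 0.0
            for j in order:
                w, p = jobs[j]
                for m in range(machines):
                    comp[m] = max(comp[m], comp[m - 1] if m else 0.0) + p[m]
                total += w * comp[-1] ** 2          # weighted square of final completion
            return total

        if __name__ == "__main__":
            jobs = [(2.0, [3, 2]), (1.0, [1, 2]), (3.0, [4, 1])]
            order = wspt_order(jobs)
            print(order, weighted_quadratic_completion(jobs, order, machines=2))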

  2. Determination of the Underlying Task Scheduling Algorithm for an Ada Runtime System

    DTIC Science & Technology

    1989-12-01

    I was also curious as to how well I could model the test cases with Ada programs. In particular, I wanted to see whether I could model the equal arrival... parameter relationships required to detect the execution of individual algorithms. These test cases were modeled using Ada programs. Then, the... results were analyzed to determine whether the Ada programs were capable of revealing the task scheduling algorithm used by the Ada run-time system. This

  3. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on a statistical analysis of the cumulative frequency distribution of operations among control steps. It tends to escape from local minima and therefore reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. The result of the experiment shows that it gives optimal solutions, even though it is greedy in nature.

  4. An advanced approach to traditional round robin CPU scheduling algorithm to prioritize processes with residual burst time nearest to the specified time quantum

    NASA Astrophysics Data System (ADS)

    Swaraj Pati, Mythili N.; Korde, Pranav; Dey, Pallav

    2017-11-01

    The purpose of this paper is to introduce an optimised variant of the round robin scheduling algorithm. Every algorithm works in its own way and has its own merits and demerits. The proposed algorithm overcomes the shortfalls of existing scheduling algorithms in terms of waiting time, turnaround time, throughput and number of context switches. The algorithm is pre-emptive and works based on the priority of the associated processes. The priority is decided on the basis of the remaining burst time of a particular process, that is, the lower the burst time, the higher the priority, and the higher the burst time, the lower the priority. To complete the execution, a time quantum is initially specified. If the remaining burst time of a particular process is less than 2x the specified time quantum but more than 1x the specified time quantum, the process is given high priority and is allowed to execute until it completes entirely and finishes. Such processes do not have to wait for their next burst cycle.
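
    A compact interpretation of the rule described above follows (not the authors' code): a process whose remaining burst lies strictly between one and two time quanta is allowed to run to completion, while everything else receives the standard quantum and rotates to the back of the ready queue.

        # Sketch of the modified round robin described in the record.
        from collections import deque

        def modified_round_robin(bursts, quantum):
            ready = deque(bursts)                       # (name, remaining burst) pairs
            t, completion = 0, {}
            while ready:
                name, remaining = ready.popleft()
                if quantum < remaining < 2 * quantum:   # near-finished: run to completion
                    t += remaining
                    completion[name] = t
                elif remaining <= quantum:
                    t += remaining
                    completion[name] = t
                else:
                    t += quantum
                    ready.append((name, remaining - quantum))
            return completion

        if __name__ == "__main__":
            print(modified_round_robin([("P1", 9), ("P2", 5), ("P3", 3)], quantum=4))
            # -> {'P2': 9, 'P3': 12, 'P1': 17}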

  5. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem aims to cover a number of flights by crew members such that all the flights are covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.

  6. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    NASA Astrophysics Data System (ADS)

    Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the least AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.

  7. An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks

    PubMed Central

    Penumalli, Chakradhar; Palanichamy, Yogesh

    2015-01-01

    A new energy efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem [BSP] and at the same time considering the node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks [MANETs] or Wireless Sensor Networks [WSN]. The CDS of a graph representing a network has a significant impact on an efficient design of routing protocol in wireless networks. Here the CDS is a distributed algorithm with activity scheduling based on unit disk graph [UDG]. The node's mobility and residual energy (RE) are considered as parameters in the construction of stable optimal energy efficient CDS. The performance is evaluated at various node densities, various transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented which yield better results. PMID:26221627

  8. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation variant (PFSP with time lags) has been the focus of attention, while the non-permutation variant (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying the time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed, consisting of an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
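
    The iterated greedy skeleton behind IGTLP/IGTLNP (destroy a few jobs, greedily reinsert them, accept improvements) is sketched below on a plain permutation flow shop; the time-lag constraints that are the paper's actual contribution are deliberately omitted, and the seed order, destruction size and data are assumptions.

        # Iterated greedy skeleton for permutation flow-shop makespan (time lags omitted).
        import random

        def makespan(perm, p):
            # p[j][m] = processing time of job j on machine m.
            machines = len(p[0])
            comp = [0.0] * machines
            for j in perm:
                for m in range(machines):
                    comp[m] = max(comp[m], comp[m - 1] if m else 0.0) + p[j][m]
            return comp[-1]

        def iterated_greedy(p, d=2, iters=200, seed=0):
            rng = random.Random(seed)
            best = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # NEH-style seed order
            for _ in range(iters):
                partial = best[:]
                removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
                for j in removed:                    # greedy best-position reinsertion
                    cands = [partial[:i] + [j] + partial[i:] for i in range(len(partial) + 1)]
                    partial = min(cands, key=lambda s: makespan(s, p))
                if makespan(partial, p) <= makespan(best, p):
                    best = partial
            return best, makespan(best, p)

        if __name__ == "__main__":
            p = [[3, 5, 2], [4, 1, 3], [2, 4, 4], [5, 2, 1]]         # 4 jobs x 3 machines
            print(iterated_greedy(p))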

  9. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    PubMed

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  10. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks

    PubMed Central

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-01-01

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062

  11. Distributed Denial of Service Attack Source Detection Using Efficient Traceback Technique (ETT) in Cloud-Assisted Healthcare Environment.

    PubMed

    Latif, Rabia; Abbas, Haider; Latif, Seemab; Masood, Ashraf

    2016-07-01

    Security and privacy are the first and foremost concerns that should be given special attention when dealing with Wireless Body Area Networks (WBANs). As WBAN sensors operate in an unattended environment and carry critical patient health information, the Distributed Denial of Service (DDoS) attack is one of the major attacks in the WBAN environment that not only exhausts the available resources but also influences the reliability of the information being transmitted. This research work is an extension of our previous work in which a machine learning based attack detection algorithm is proposed to detect DDoS attacks in the WBAN environment. However, in order to avoid complexity, no consideration was given to the traceback mechanism. During traceback, the challenge lies in reconstructing the attack path leading to the identification of the attack source. Among existing traceback techniques, the Probabilistic Packet Marking (PPM) approach is the most commonly used technique in conventional IP-based networks. However, since the marking probability assignment has a significant effect on both the convergence time and the performance of a scheme, it is not directly applicable in the WBAN environment due to high convergence time and overhead on intermediate nodes. Therefore, in this paper we propose a new scheme called Efficient Traceback Technique (ETT) based on the Dynamic Probability Packet Marking (DPPM) approach, which uses the MAC header in place of the IP header. Instead of using a fixed marking probability, the proposed scheme uses a variable marking probability based on the number of hops travelled by a packet to reach the target node. Finally, path reconstruction algorithms are proposed to trace back an attacker. Evaluation and simulation results indicate that the proposed solution outperforms fixed PPM in terms of convergence time and computational overhead on nodes.
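
    As a rough illustration of the dynamic marking idea, the sketch below marks a packet with a probability that depends on the number of hops it has already travelled (the classic choice p = 1/d), so that early markings are not systematically overwritten by later routers. The packet fields stand in for MAC-header fields and are assumptions, not the ETT format.

```python
import random

class Packet:
    def __init__(self):
        self.hops = 0          # hops travelled so far (carried in the header)
        self.mark = None       # identifier of the last marking node, if any

def forward(packet, node_id, rng=random):
    """Dynamic probability packet marking: mark with probability 1/d,
    where d is the number of hops the packet has travelled so far."""
    packet.hops += 1
    if rng.random() < 1.0 / packet.hops:
        packet.mark = node_id
    return packet

# Toy path: with p = 1/d each node on the path survives as the final mark
# with roughly equal probability, which is what makes path reconstruction feasible.
pkt = Packet()
for node in ["n1", "n2", "n3", "n4"]:
    forward(pkt, node)
print(pkt.mark, pkt.hops)
```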

  12. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.

  13. SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Rueff; Lyle Roybal; Denis Vollmer

    2013-01-01

    There is a significant need to protect the nation’s energy infrastructures from malicious actors using cyber methods. Supervisory, Control, and Data Acquisition (SCADA) systems may be vulnerable due to the insufficient security implemented during the design and deployment of these control systems. This is particularly true in older legacy SCADA systems that are still commonly in use. The purpose of INL’s research on the SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) project was to determine if and how data compression techniques could be used to identify and protect SCADA systems from cyber attacks. Initially, the concept was centered on how to train a compression algorithm to recognize normal control system traffic versus hostile network traffic. Because large portions of the TCP/IP message traffic (called packets) are repetitive, the concept of using compression techniques to differentiate “non-normal” traffic was proposed. In this manner, malicious SCADA traffic could be identified at the packet level prior to completing its payload. Previous research has shown that SCADA network traffic has traits desirable for compression analysis. This work investigated three different approaches to identify malicious SCADA network traffic using compression techniques. The preliminary analyses and results presented herein are clearly able to differentiate normal from malicious network traffic at the packet level at a very high confidence level for the conditions tested. Additionally, the master dictionary approach used in this research appears to initially provide a meaningful way to categorize and compare packets within a communication channel.
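
    One generic way to realize the compression idea, sketched below under the assumption that a corpus of known-good SCADA packets is available, is to measure how many extra bytes a new packet costs when compressed together with that baseline: packets resembling normal traffic add little, unfamiliar payloads add a lot. This zlib-based illustration is not INL's master-dictionary method.

```python
import zlib

def extra_bytes(baseline: bytes, packet: bytes) -> int:
    """How many additional compressed bytes the packet costs on top of the baseline."""
    base = len(zlib.compress(baseline, 9))
    both = len(zlib.compress(baseline + packet, 9))
    return both - base

def is_anomalous(baseline: bytes, packet: bytes, threshold: int) -> bool:
    return extra_bytes(baseline, packet) > threshold

# Toy example: baseline built from repetitive "normal" polling traffic.
normal = b"READ 40001 LEN 2;" * 200
ok_pkt = b"READ 40001 LEN 2;"
odd_pkt = bytes(range(256)) * 4          # unfamiliar, high-entropy payload

print(extra_bytes(normal, ok_pkt))       # small increment
print(extra_bytes(normal, odd_pkt))      # much larger increment
print(is_anomalous(normal, odd_pkt, threshold=50))
```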

  14. A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling

    PubMed Central

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO and then updates each subpopulation by using different local search schemes such as variable neighborhood search (VNS) and individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using estimation of distribution algorithm (EDA) and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSP instances taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP. PMID:24453841

  15. Reliable Transport over SpaceWire for James Webb Space Telescope (JWST) Focal Plane Electronics (FPE) Network

    NASA Technical Reports Server (NTRS)

    Rakow, Glenn; Schnurr, Richard; Dailey, Christopher; Shakoorzadeh, Kamdin

    2003-01-01

    NASA's James Webb Space Telescope (JWST) faces difficult technical and budgetary challenges to overcome before its scheduled launch in 2010. The Integrated Science Instrument Module (ISIM) shares these challenges. The major challenge addressed in this paper is the data network used to collect, process, compress, and store infrared data. A total of 114 Mbps of raw information must be collected from 19 sources and delivered to the two redundant data processing units across a twenty-meter deployed, thermally restricted interface. The data must further be transferred to the solid-state recorder and the spacecraft. The JWST detectors are kept at cryogenic temperatures to obtain the sensitivity necessary to measure faint energy sources. The Focal Plane Electronics (FPE) that sample the detectors, generate packets from the samples, and transmit these packets to the processing electronics must dissipate little power in order to help keep the detectors at these cold temperatures. Separating the low-powered front-end electronics from the higher-powered processing electronics and using a simple high-speed protocol to transmit the detector data minimize the power dissipation near the detectors. Low Voltage Differential Signaling (LVDS) drivers were considered an obvious choice for the physical layer because of their high speed and low power. The mechanical restriction on the number of cables across the thermal interface forces the image packets to be concentrated onto two high-speed links. These links connect the many image packet sources, the Focal Plane Electronics (FPE) located near the cryogenic detectors, to the processing electronics on the spacecraft structure. From 12 to 10,000 seconds of raw data are processed to make up an image, with various algorithms integrating the pixel data. Loss of the commands that configure the detectors, as well as loss of the science data itself, may cause inefficiencies in the use of the telescope that are unacceptable given the high cost of the observatory. This combination of requirements necessitates a redundant, fault-tolerant, high-speed, low-mass, low-power network with a low bit error rate (1E-9 to 1E-12). The ISIM systems team performed many studies of the various network architectures that meet these requirements. The architecture selected uses the SpaceWire protocol, with a new transport and network layer added to implement end-to-end reliable transport. The network and reliable transport mechanism must be implemented in hardware because of the high average information rate and because power and size restrictions limit the detectors' ability to buffer data. This network and transport mechanism was designed to be compatible with existing SpaceWire links and routers so that existing equipment and designs may be leveraged. The transport layer specification is being coordinated with the European Space Agency (ESA) SpaceWire Working Group and the Consultative Committee for Space Data Systems (CCSDS) Standard Onboard Interface (SOIF) panel, with the intent of developing a standard for reliable transport over SpaceWire. Changes to the protocol presented are likely, since negotiations are ongoing with these groups. A block of RTL VHDL that implements a multi-port SpaceWire router with an external user interface will be developed and integrated with an existing SpaceWire link design. The external user interface will be the local interface that sources and sinks packets onto and off of the network (Figure 3). The external user interface implements the network and transport layer and handles acknowledgements and retries of packets for reliable transport over the network. Because the design is written in RTL, it may be ported to any technology, but it will initially be targeted to the new Actel Accelerator series (AX) part. Each link will run at 160 Mbps and the power will be about 0.165 W per link, worst case, in the Actel AX.
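
    The end-to-end reliability layer described above boils down to numbering packets, acknowledging them, and retransmitting on timeout. The following stop-and-wait sketch over a lossy channel is a didactic simplification: the real design is implemented in hardware over SpaceWire with windowing, and the channel model, loss rate, and retry limit here are assumptions.

```python
import random

class LossyChannel:
    """Drops packets and acknowledgements with a fixed probability."""
    def __init__(self, loss=0.2, seed=1):
        self.rng = random.Random(seed)
        self.loss = loss

    def send(self, item):
        return None if self.rng.random() < self.loss else item

def reliable_send(data_units, channel, max_retries=10):
    """Stop-and-wait: transmit each numbered packet until its ACK comes back."""
    delivered = []
    for seq, payload in enumerate(data_units):
        for attempt in range(max_retries):
            pkt = channel.send((seq, payload))          # packet may be lost
            if pkt is not None:
                delivered.append(pkt)                   # receiver accepts packet
                ack = channel.send(seq)                 # ACK may also be lost
                if ack == seq:
                    break                               # acknowledged: next packet
            # timeout: retransmit (duplicates are filtered by sequence number below)
        else:
            raise RuntimeError(f"packet {seq} not acknowledged after {max_retries} tries")
    # keep only the first copy of each sequence number (duplicates from lost ACKs)
    seen, unique = set(), []
    for seq, payload in delivered:
        if seq not in seen:
            seen.add(seq)
            unique.append(payload)
    return unique

chan = LossyChannel(loss=0.3)
print(reliable_send([b"img0", b"img1", b"img2"], chan))
```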

  16. President Bill Clinton visits JSC

    NASA Image and Video Library

    1998-04-14

    S98-05017 (14 April 1998) --- President Bill Clinton prepares to use a fork to sample some space food while visiting NASA's Johnson Space Center (JSC). Holding the food packet is U.S. Sen. John H. Glenn Jr. (D.-Ohio), currently in training at JSC as a payload specialist for a flight scheduled later this year aboard the Space Shuttle Discovery. Looking on is astronaut Curtis L. Brown Jr., STS-95 commander. The picture was taken in the full fuselage trainer (FFT). Photo Credit: Joe McNally, National Geographic, for NASA

  17. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  18. Decoherence and surface hopping: When can averaging over initial conditions help capture the effects of wave packet separation?

    NASA Astrophysics Data System (ADS)

    Subotnik, Joseph E.; Shenvi, Neil

    2011-06-01

    Fewest-switches surface hopping (FSSH) is a popular nonadiabatic dynamics method which treats nuclei with classical mechanics and electrons with quantum mechanics. In order to simulate the motion of a wave packet as accurately as possible, standard FSSH requires a stochastic sampling of the trajectories over a distribution of initial conditions corresponding, e.g., to the Wigner distribution of the initial quantum wave packet. Although it is well-known that FSSH does not properly account for decoherence effects, there is some confusion in the literature about whether or not this averaging over a distribution of initial conditions can approximate some of the effects of decoherence. In this paper, we not only show that averaging over initial conditions does not generally account for decoherence, but also why it fails to do so. We also show how an apparent improvement in accuracy can be obtained for a fortuitous choice of model problems, even though this improvement is not possible, in general. For a basic set of one-dimensional and two-dimensional examples, we find significantly improved results using our recently introduced augmented FSSH algorithm.

  19. Improved routing strategy based on gravitational field theory

    NASA Astrophysics Data System (ADS)

    Song, Hai-Quan; Guo, Jin

    2015-10-01

    Routing and path selection are crucial for many communication and logistic applications. We study the interaction between nodes and packets and establish a simple model for describing the attraction of a node to a packet during the transmission process using gravitational field theory, considering both the real and the potential congestion of the nodes. On the basis of this model, we propose a gravitational field routing strategy that considers the attractions of all of the nodes on the travel path to the packet. In order to illustrate the efficiency of the proposed routing algorithm, we introduce the order parameter to measure the throughput of the network via the critical value of the phase transition from a free-flow phase to a congested phase, and study the distribution of betweenness centrality and traffic congestion. Simulations show that, compared with the shortest path routing strategy, the gravitational field routing strategy considerably enhances the throughput of the network and balances the traffic load, and nearly all of the nodes are used efficiently. Project supported by the Technology and Development Research Project of China Railway Corporation (Grant No. 2012X007-D) and the Key Program of Technology and Development Research Foundation of China Railway Corporation (Grant No. 2012X003-A).
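
    A minimal way to illustrate routing by node attraction rather than hop count alone is to weight each candidate next hop by its spare buffer capacity and run a shortest-path search on those weights. The sketch below does this with plain Dijkstra and a made-up attraction-to-cost mapping; it is not the paper's gravitational-field formulation.

```python
import heapq

def edge_cost(queue_len, capacity, alpha=1.0):
    """Cheaper to enter nodes with more spare buffer ('stronger attraction')."""
    spare = max(capacity - queue_len, 1)
    return 1.0 + alpha / spare        # one hop plus a congestion penalty

def route(adj, queue_len, capacity, src, dst):
    """Dijkstra over congestion-aware edge costs; assumes dst is reachable."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + edge_cost(queue_len[v], capacity[v])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
queue_len = {"a": 0, "b": 9, "c": 1, "d": 0}     # node b is congested
capacity = {"a": 10, "b": 10, "c": 10, "d": 10}
print(route(adj, queue_len, capacity, "a", "d"))  # prefers a -> c -> d
```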

  20. Adaptive packet switch with an optical core (demonstrator)

    NASA Astrophysics Data System (ADS)

    Abdo, Ahmad; Bishtein, Vadim; Clark, Stewart A.; Dicorato, Pino; Lu, David T.; Paredes, Sofia A.; Taebi, Sareh; Hall, Trevor J.

    2004-11-01

    A three-stage opto-electronic packet switch architecture is described consisting of a reconfigurable optical centre stage surrounded by two electronic buffering stages partitioned into sectors to ease memory contention. A Flexible Bandwidth Provision (FBP) algorithm, implemented on a soft-core processor, is used to change the configuration of the input sectors and optical centre stage to set up internal paths that will provide variable bandwidth to serve the traffic. The switch is modeled by a bipartite graph built from a service matrix, which is a function of the arriving traffic. The bipartite graph is decomposed by solving an edge-colouring problem and the resulting permutations are used to configure the switch. Simulation results show that this architecture exhibits a dramatic reduction of complexity and increased potential for scalability, at the price of only a modest spatial speed-up k, 1

  1. Maximization of the Supportable Number of Sensors in QoS-Aware Cluster-Based Underwater Acoustic Sensor Networks

    PubMed Central

    Nguyen, Thi-Tham; Van Le, Duc; Yoon, Seokhoon

    2014-01-01

    This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class. PMID:24608009
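
    At the core of the formulation above is the relation between the per-hop link success probability p, the allowed number of transmission attempts r, and the packet delivery probability 1 - (1 - p)^r. The snippet below inverts that relation to find the smallest attempt count meeting a class's PDR requirement; it is only the single-hop building block, with made-up numbers, not the full greedy/interior-point optimization.

```python
import math

def pdp(p_link: float, attempts: int) -> float:
    """Packet delivery probability with at most `attempts` transmissions."""
    return 1.0 - (1.0 - p_link) ** attempts

def min_attempts(p_link: float, pdr_required: float) -> int:
    """Smallest r with 1 - (1 - p)^r >= the PDR requirement."""
    return math.ceil(math.log(1.0 - pdr_required) / math.log(1.0 - p_link))

# Example: an acoustic link with 60% per-attempt success and two QoS classes.
for pdr in (0.90, 0.99):
    r = min_attempts(0.6, pdr)
    print(pdr, r, round(pdp(0.6, r), 4))
```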

  2. Maximization of the supportable number of sensors in QoS-aware cluster-based underwater acoustic sensor networks.

    PubMed

    Nguyen, Thi-Tham; Le, Duc Van; Yoon, Seokhoon

    2014-03-07

    This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class.

  3. Fair and efficient network congestion control based on minority game

    NASA Astrophysics Data System (ADS)

    Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang

    2011-12-01

    Low link utilization, RTT unfairness, and unfairness in multi-bottleneck networks are problems that exist in most present network congestion control algorithms. Through an analogy between network congestion control and the "El Farol Bar" problem, we establish a congestion control model based on the minority game (MG) and then present a novel network congestion control algorithm based on this model. Simulation results indicate that the proposed algorithm achieves link utilization close to 100%, a zero packet loss rate, and a small queue size. In addition, the RTT unfairness and multi-bottleneck unfairness problems are resolved, achieving max-min fairness in multi-bottleneck networks while efficiently weakening the "ping-pong" oscillation caused by global synchronization.

  4. SFTP: A Secure and Fault-Tolerant Paradigm against Blackhole Attack in MANET

    NASA Astrophysics Data System (ADS)

    KumarRout, Jitendra; Kumar Bhoi, Sourav; Kumar Panda, Sanjaya

    2013-02-01

    Security issues in MANETs are a challenging task nowadays. MANETs are vulnerable to passive and active attacks because of their limited resources and lack of centralized authority. A blackhole attack is a network-layer attack that degrades network performance by dropping packets. In this paper, we propose a Secure Fault-Tolerant Paradigm (SFTP) which checks for blackhole attacks in the network. The three phases used in the SFTP algorithm are coverage-area design to determine the area of coverage, a Network Connection algorithm to design a fault-tolerant model, and a Route Discovery algorithm to discover the route and deliver data from source to destination. SFTP gives better network performance by making the network fault free.

  5. On securing wireless sensor network--novel authentication scheme against DOS attacks.

    PubMed

    Raja, K Nirmal; Beno, M Marsaline

    2014-10-01

    Wireless sensor networks are generally deployed for collecting data from various environments. Several application-specific sensor network cryptography algorithms have been proposed in research. However, WSNs have many constraints, including low computation capability, limited memory, limited energy resources, and vulnerability to physical capture, which impose unique security challenges and call for substantial improvements. This paper presents a novel security mechanism and algorithm for wireless sensor network security, along with an application of this algorithm. The proposed scheme provides strong authentication against Denial of Service (DoS) attacks. The scheme is simulated using Network Simulator 2 (NS2) and analyzed based on the network packet delivery ratio; the results show that throughput is improved.

  6. A traveling-salesman-based approach to aircraft scheduling in the terminal area

    NASA Technical Reports Server (NTRS)

    Luenberger, Robert A.

    1988-01-01

    An efficient algorithm is presented, based on the well-known algorithm for the traveling salesman problem, for scheduling aircraft arrivals into major terminal areas. The algorithm permits, but strictly limits, reassigning an aircraft from its initial position in the landing order. This limitation is needed so that no aircraft or aircraft category is unduly penalized. Results indicate, for the mix of arrivals investigated, a potential increase in capacity in the 3 to 5 percent range. Furthermore, it is shown that the computation time for the algorithm grows only linearly with problem size.

  7. Design and analysis of a model predictive controller for active queue management.

    PubMed

    Wang, Ping; Chen, Hong; Yang, Xiaoping; Ma, Yan

    2012-01-01

    Model predictive (MP) control is proposed as a novel active queue management (AQM) algorithm for dynamic computer networks. According to the predicted future queue length in the data buffer, early packets at the router are dropped reasonably by the MPAQM controller so that the queue length reaches the desired value with minimal tracking error. The drop probability is obtained by optimizing network performance. Further, randomized algorithms are applied to analyze the robustness of MPAQM and to provide the stability domain of systems with uncertain network parameters. The performance of MPAQM is evaluated through a series of simulations in NS2. The simulation results show that the MPAQM algorithm outperforms the RED, PI, and REM algorithms in terms of stability, disturbance rejection, and robustness.
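
    The essence of such a controller is to predict where the queue is heading and adjust the drop probability to steer it back toward a reference length. The sketch below uses a one-step fluid-model prediction with a simple proportional-plus-damping correction standing in for the optimization; the model, gains, and reference value are illustrative assumptions, not the MPAQM design.

```python
def predicted_queue(q, arrival_rate, service_rate, drop_prob, dt=0.01):
    """One-step fluid prediction: accepted arrivals minus departures over dt."""
    return max(q + dt * (arrival_rate * (1.0 - drop_prob) - service_rate), 0.0)

def update_drop_prob(q, drop_prob, q_ref, arrival_rate, service_rate,
                     kp=0.0005, kd=0.002, dt=0.01):
    """Raise the drop probability when the predicted queue overshoots the
    reference or is still growing; clamp the result to [0, 1]."""
    q_next = predicted_queue(q, arrival_rate, service_rate, drop_prob, dt)
    drop_prob += kp * (q_next - q_ref) + kd * (q_next - q)
    return min(max(drop_prob, 0.0), 1.0)

# Toy closed loop: offered load above capacity; the queue is steered toward q_ref.
q, p = 0.0, 0.0
for _ in range(2000):
    p = update_drop_prob(q, p, q_ref=100.0, arrival_rate=12000.0, service_rate=10000.0)
    q = predicted_queue(q, 12000.0, 10000.0, p)   # the "plant" uses the same fluid model
print(round(q, 1), round(p, 3))                   # settles near q_ref with p close to 1/6
```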

  8. Automated Scheduling of Personnel to Staff Operations for the Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Knight, Russell; Mishkin, Andrew; Allbaugh, Alicia

    2014-01-01

    Leveraging previous work on scheduling personnel for space mission operations, we have adapted ASPEN (Activity Scheduling and Planning Environment) [1] to the domain of scheduling personnel for operations of the Mars Science Laboratory (MSL). Automated scheduling of personnel is not new; we compare our representations to a sampling of available employee scheduling systems with respect to desired features. We describe the constraints required by MSL personnel schedulers and how each is handled by the scheduling algorithm.

  9. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data. Performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.
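
    To make the two-stage idea concrete, the sketch below clusters normal traffic windows with k-means, re-expresses each window as its distances to the learned centroids, and fits a one-class SVM on that reduced representation as the anomaly detector. It uses scikit-learn on synthetic data and is only a schematic stand-in for the WSN-specific, agent-based implementation described above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic cross-layer feature vectors for "normal" traffic windows (8 features).
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Stage 1: unsupervised clustering; each window is re-expressed as its distances
# to the learned centroids, a compact representation of the original payload.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(normal)
reduced = kmeans.transform(normal)

# Stage 2: a one-class SVM learns the signature of normal traffic only.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(reduced)

def classify(window):
    """+1 = consistent with normal traffic, -1 = anomalous (possible intrusion)."""
    return detector.predict(kmeans.transform(window.reshape(1, -1)))[0]

print(classify(rng.normal(0.0, 1.0, size=8)))   # typical window, usually +1
print(classify(np.full(8, 8.0)))                # flooding-like outlier, -1
```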

  10. Rolling scheduling of electric power system with wind power based on improved NNIA algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.

    2017-11-01

    This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the unit operation cost increment and the curtailed wind power of the power grid as dual modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for its solution. The proposed rolling scheduling model further reduces the system operation cost in the intra-day generation process, enhances the system's capacity to accommodate wind power, and modifies the power flow of key transmission sections in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model noticeably guides the direction of antibody evolution, significantly speeds up convergence to the final solution, and enhances the local search capability.

  11. Resource-constrained scheduling with hard due windows and rejection penalties

    NASA Astrophysics Data System (ADS)

    Garcia, Christopher

    2016-09-01

    This work studies a scheduling problem where each job must be either accepted and scheduled to complete within its specified due window, or rejected altogether. Each job has a certain processing time and contributes a certain profit if accepted or penalty cost if rejected. There is a set of renewable resources, and no resource limit can be exceeded at any time. Each job requires a certain amount of each resource when processed, and the objective is to maximize total profit. A mixed-integer programming formulation and three approximation algorithms are presented: a priority rule heuristic, an algorithm based on the metaheuristic for randomized priority search and an evolutionary algorithm. Computational experiments comparing these four solution methods were performed on a set of generated benchmark problems covering a wide range of problem characteristics. The evolutionary algorithm outperformed the other methods in most cases, often significantly, and never significantly underperformed any method.

  12. Efficient Computation of Separation-Compliant Speed Advisories for Air Traffic Arriving in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.

    2012-01-01

    A class of problems in air traffic management asks for a scheduling algorithm that supplies the air traffic services authority not only with a schedule of arrivals and departures, but also with speed advisories. Since advisories must be finite, a scheduling algorithm must ultimately produce a finite data set, hence must either start with a purely discrete model or involve a discretization of a continuous one. The former choice, often preferred for intuitive clarity, naturally leads to mixed-integer programs, hindering proofs of correctness and computational cost bounds (crucial for real-time operations). In this paper, a hybrid control system is used to model air traffic scheduling, capturing both the discrete and continuous aspects. This framework is applied to a class of problems, called the Fully Routed Nominal Problem. We prove a number of geometric results on feasible schedules and use these results to formulate an algorithm that attempts to compute a collective speed advisory, is effectively finite, and has computational cost polynomial in the number of aircraft. This work is a first step toward optimization and models refined with more realistic detail.

  13. End-to-end communication test on variable length packet structures utilizing AOS testbed

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Sank, V.; Fong, Wai; Miko, J.; Powers, M.; Folk, John; Conaway, B.; Michael, K.; Yeh, Pen-Shu

    1994-01-01

    This paper describes a communication test, which successfully demonstrated the transfer of losslessly compressed images in an end-to-end system. These compressed images were first formatted into variable length Consultative Committee for Space Data Systems (CCSDS) packets in the Advanced Orbiting System Testbed (AOST). The CCSDS data structures were transferred from the AOST to the Radio Frequency Simulations Operations Center (RFSOC), via a fiber optic link, where data was then transmitted through the Tracking and Data Relay Satellite System (TDRSS). The received data acquired at the White Sands Complex (WSC) was transferred back to the AOST where the data was captured and decompressed back to the original images. This paper describes the compression algorithm, the AOST configuration, key flight components, data formats, and the communication link characteristics and test results.

  14. Quasi-soliton scattering in quantum spin chains

    NASA Astrophysics Data System (ADS)

    Vlijm, R.; Ganahl, M.; Fioretto, D.; Brockmann, M.; Haque, M.; Evertz, H. G.; Caux, J.-S.

    2015-12-01

    The quantum scattering of magnon bound states in the anisotropic Heisenberg spin chain is shown to display features similar to the scattering of solitons in classical exactly solvable models. Localized colliding Gaussian wave packets of bound magnons are constructed from string solutions of the Bethe equations and subsequently evolved in time, relying on an algebraic Bethe ansatz based framework for the computation of local expectation values in real space-time. The local magnetization profile shows the trajectories of colliding wave packets of bound magnons, which obtain a spatial displacement upon scattering. Analytic predictions on the displacements for various values of anisotropy and string lengths are derived from scattering theory and Bethe ansatz phase shifts, matching time-evolution fits on the displacements. The time-evolved block decimation algorithm allows for the study of scattering displacements from spin-block states, showing similar scattering displacement features.

  15. Quasi-soliton scattering in quantum spin chains

    NASA Astrophysics Data System (ADS)

    Fioretto, Davide; Vljim, Rogier; Ganahl, Martin; Brockmann, Michael; Haque, Masud; Evertz, Hans-Gerd; Caux, Jean-Sébastien

    The quantum scattering of magnon bound states in the anisotropic Heisenberg spin chain is shown to display features similar to the scattering of solitons in classical exactly solvable models. Localized colliding Gaussian wave packets of bound magnons are constructed from string solutions of the Bethe equations and subsequently evolved in time, relying on an algebraic Bethe ansatz based framework for the computation of local expectation values in real space-time. The local magnetization profile shows the trajectories of colliding wave packets of bound magnons, which obtain a spatial displacement upon scattering. Analytic predictions on the displacements for various values of anisotropy and string lengths are derived from scattering theory and Bethe ansatz phase shifts, matching time evolution fits on the displacements. The TEBD algorithm allows for the study of scattering displacements from spin-block states, showing similar displacement scattering features.

  16. Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.

    PubMed

    Yu, Zhang; Yang, Xiaomei

    2013-01-01

    By analyzing the characteristics of the whole-set orders problem and combining it with the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into this algorithm. Furthermore, experimental results demonstrate its feasibility and efficiency.

  17. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage the various workflows, virtual machines (VMs), and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture and analyze the differences and best usages of the algorithms in terms of performance, cost, and the price/performance ratio via experimental studies. PMID:29399237

  18. Comparison of multiobjective evolutionary algorithms for operations scheduling under machine availability constraints.

    PubMed

    Frutos, M; Méndez, M; Tohmé, F; Broz, D

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGAII, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we find that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions in the approximate Pareto frontier.

  19. A computer method for schedule processing and quick-time updating.

    NASA Technical Reports Server (NTRS)

    Mccoy, W. H.

    1972-01-01

    A schedule analysis program is presented which can be used to process any schedule with continuous flow and with no loops. Although generally thought of as a management tool, it has applicability to such extremes as music composition and computer program efficiency analysis. Other possibilities for its use include the determination of electrical power usage during some operation such as spacecraft checkout, and the determination of impact envelopes for the purpose of scheduling payloads in launch processing. At the core of the described computer method is an algorithm which computes the position of each activity bar on the output waterfall chart. The algorithm is basically a maximal-path computation which gives to each node in the schedule network the maximal path from the initial node to the given node.
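
    The maximal-path computation mentioned above is the standard longest-path calculation on an acyclic activity network: visit nodes in topological order and take, for each node, the maximum over its predecessors of (predecessor position + activity duration). A minimal sketch with a made-up activity network follows.

```python
from collections import defaultdict, deque

def bar_positions(durations, edges):
    """Longest path from the initial node(s) to every node of an acyclic schedule network.
    durations[n] = duration of activity n; edges = list of (predecessor, successor)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set(durations)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    start = {n: 0 for n in nodes}
    queue = deque(n for n in nodes if indeg[n] == 0)      # initial node(s)
    while queue:                                          # topological order
        u = queue.popleft()
        for v in succ[u]:
            start[v] = max(start[v], start[u] + durations[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return start      # position of each activity bar on the output waterfall chart

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(bar_positions(durations, edges))   # D starts after the longer of A->B and A->C
```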

  20. Scheduling job shop - A case study

    NASA Astrophysics Data System (ADS)

    Abas, M.; Abbas, A.; Khan, W. A.

    2016-08-01

    Scheduling in the job shop is important for efficient utilization of machines in the manufacturing industry. A number of algorithms are available for the scheduling of jobs, depending on the machine tools, indirect consumables, and the jobs to be processed. In this paper a case study is presented for the scheduling of jobs when parts are processed on the available machines. Through time and motion study, setup time and operation time are measured as the total processing time for a variety of products having different manufacturing processes. Based on due dates, different levels of priority are assigned to the jobs, and the jobs are scheduled on the basis of priority. In view of the measured processing times, the processing times of some new jobs are estimated, and for efficient utilization of the available machines an algorithm is proposed and validated.

  1. Operating room scheduling using hybrid clustering priority rule and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Santoso, Linda Wahyuni; Sinawan, Aisyah Ashrinawati; Wijaya, Andi Rahadiyan; Sudiarso, Andi; Masruroh, Nur Aini; Herliansyah, Muhammad Kusumawan

    2017-11-01

    The operating room is a bottleneck resource in most hospitals, so the operating room scheduling system influences the overall performance of the hospital. This research develops a mathematical model of operating room scheduling for elective patients which considers patient priority under a limited number of surgeons, operating rooms, and nurse teams. Clustering analysis was conducted on the surgery duration data using hierarchical and non-hierarchical methods. The priority rule of each resulting cluster was determined using the Shortest Processing Time method. A Genetic Algorithm was used to generate the daily operating room schedule yielding the lowest values of patient waiting time and nurse overtime. The computational results show that the proposed model reduced patient waiting time by approximately 32.22% and nurse overtime by approximately 32.74% when compared to the actual schedule.

  2. A Simulation Based Approach to Optimize Berth Throughput Under Uncertainty at Marine Container Terminals

    NASA Technical Reports Server (NTRS)

    Golias, Mihalis M.

    2011-01-01

    Berth scheduling is a critical function at marine container terminals, and determining the best berth schedule depends on several factors including the type and function of the port, the size of the port, its location, nearby competition, and the type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objectives of maximizing customer satisfaction and the reliability of the berth schedule, under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithm-based heuristic, and a simulation-based post-Pareto analysis is proposed as the solution approach to the resulting problem. Based on a number of experiments it is concluded that the proposed berth scheduling policy outperforms a berth scheduling policy in which reliability is not considered.

  3. Random early detection for congestion avoidance in wired networks: a discretized pursuit learning-automata-like solution.

    PubMed

    Misra, Sudip; Oommen, B John; Yanamandra, Sreekeerthy; Obaidat, Mohammad S

    2010-02-01

    In this paper, we present a learning-automata-like (LAL) mechanism for congestion avoidance in wired networks. (The reason why the mechanism is not a pure LA, but rather why it yet mimics one, is clarified in the body of this paper.) Our algorithm, named LAL Random Early Detection (LALRED), is founded on the principles of the operations of existing RED congestion-avoidance mechanisms, augmented with a LAL philosophy. The primary objective of LALRED is to optimize the value of the average size of the queue used for congestion avoidance and to consequently reduce the total loss of packets at the queue. We attempt to achieve this by stationing a LAL algorithm at the gateways and by discretizing the probabilities of the corresponding actions of the congestion-avoidance algorithm. At every time instant, the LAL scheme, in turn, chooses the action that possesses the maximal ratio between the number of times the chosen action is rewarded and the number of times that it has been chosen. In LALRED, we simultaneously increase the likelihood of the scheme converging to the action which minimizes the number of packet drops at the gateway. Our approach helps to improve the performance of congestion avoidance by adaptively minimizing the queue-loss rate and the average queue size. Simulation results obtained using NS2 establish the improved performance of LALRED over the traditional RED methods, which were chosen as the benchmarks for performance comparison purposes.
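
    The action-selection rule at the heart of the scheme keeps, for each candidate action, the number of times it was chosen and the number of times it was rewarded, and picks the action with the best reward ratio. The snippet below shows that bookkeeping against a toy reward signal, with a small exploration rate added so every ratio keeps being estimated; the environment and action set are stand-ins, not the LALRED gateway logic.

```python
import random

class RatioLearner:
    """Pick the action with the highest rewarded/chosen ratio, with light exploration."""
    def __init__(self, n_actions, epsilon=0.1, seed=0):
        self.chosen = [0] * n_actions
        self.rewarded = [0] * n_actions
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        if 0 in self.chosen or self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.chosen))    # explore
        ratios = [r / c for r, c in zip(self.rewarded, self.chosen)]
        return max(range(len(ratios)), key=ratios.__getitem__)

    def update(self, action, reward):
        self.chosen[action] += 1
        if reward:
            self.rewarded[action] += 1

# Toy environment: action 2 is the adjustment that most often reduces drops.
env = random.Random(42)
reward_prob = [0.3, 0.5, 0.8]
learner = RatioLearner(n_actions=3, seed=1)
for _ in range(500):
    a = learner.select()
    learner.update(a, env.random() < reward_prob[a])

ratios = [r / max(c, 1) for r, c in zip(learner.rewarded, learner.chosen)]
print(ratios.index(max(ratios)), learner.chosen)   # typically settles on action 2
```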

  4. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. The method did not take link quality and delay requirement into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The object of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ · max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than the polynomial, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092

  5. Network Speech Systems Technology Program.

    DTIC Science & Technology

    1980-09-30

    ...recognized that the lumped-speaker approximation could be extended even more generally to include cases of combined circuit-switched speech and packet... based on these tables. The first function is an important element of the more general task of system control for a switched network, which in... programs are in preparation, as described below, for both steady-state evaluation and dynamic performance simulation of the algorithm in general

  6. Research in Network Management Techniques for Tactical Data Communications Network.

    DTIC Science & Technology

    1982-09-01

    ...the control period. Research areas include Packet Network modelling, adaptive network routing, network design algorithms, network design techniques... controllers are designed to perform their limited tasks optimally. For the dynamic routing problem considered here, the local controllers are node... feedback to finding an optimum steady-state routing (static strategies) under non-congested... control which can be easily implemented in real time.

  7. Approximation algorithms for scheduling unrelated parallel machines with release dates

    NASA Astrophysics Data System (ADS)

    Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.

    2017-01-01

    In this paper we propose approaches to the optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain to ensure computational effectiveness. We discuss the complexity of synthesizing exact schedules and compare it with approximate, close-to-optimal solutions. We also explain how the algorithm works on an example of two unrelated parallel machines and five jobs with release dates. Performance results showing the efficiency of the proposed approach are given.
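
    As a rough baseline for this problem class (unrelated machines, job-specific release dates), the sketch below implements a simple list-scheduling heuristic: repeatedly pick the job-machine pair that can finish earliest given machine availability and release dates. The instance data are made up, and this greedy rule is only an illustration, not the dynamic programming scheme of the paper.

```python
def greedy_schedule(proc, release):
    """proc[j][m] = processing time of job j on machine m; release[j] = release date.
    Returns (assignment, makespan) where assignment[j] = (machine, start, finish)."""
    n_jobs, n_mach = len(proc), len(proc[0])
    free_at = [0.0] * n_mach
    unscheduled = set(range(n_jobs))
    assignment = {}
    while unscheduled:
        best = None
        for j in unscheduled:
            for m in range(n_mach):
                start = max(free_at[m], release[j])
                finish = start + proc[j][m]
                if best is None or finish < best[0]:
                    best = (finish, j, m, start)
        finish, j, m, start = best
        free_at[m] = finish
        assignment[j] = (m, start, finish)
        unscheduled.remove(j)
    return assignment, max(free_at)

# Two unrelated machines and five jobs with release dates (problem size as in the paper;
# the numbers themselves are invented for illustration).
proc = [[4, 6], [3, 2], [5, 4], [2, 7], [6, 3]]
release = [0, 1, 0, 2, 3]
schedule, cmax = greedy_schedule(proc, release)
print(schedule, cmax)
```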

  8. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  9. Scheduling algorithm for mission planning and logistics evaluation users' guide

    NASA Technical Reports Server (NTRS)

    Chang, H.; Williams, J. M.

    1976-01-01

    The Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program is a mission planning tool composed of three subsystems: the mission payloads subsystem (MPLS), which generates a list of feasible combinations from a payload model for a given calendar year; GREEDY, a heuristic model used to find the best traffic model; and the operations simulation and resources scheduling subsystem (OSARS), which determines traffic model feasibility for the available resources. SAMPLE provides the user with options to execute MPLS, GREEDY, GREEDY-OSARS, or MPLS-GREEDY-OSARS.

  10. Real-time scheduling using minimum search

    NASA Technical Reports Server (NTRS)

    Tadepalli, Prasad; Joshi, Varad

    1992-01-01

    In this paper we consider a simple model of real-time scheduling. We present a real-time scheduling system called RTS which is based on Korf's Minimin algorithm. Experimental results show that the schedule quality initially improves with the amount of look-ahead search and then tapers off quickly. So it appears that reasonably good schedules can be produced with a relatively shallow search.

  11. Aspects of job scheduling

    NASA Technical Reports Server (NTRS)

    Phillips, K.

    1976-01-01

    A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.

  12. Diverse task scheduling for individualized requirements in cloud manufacturing

    NASA Astrophysics Data System (ADS)

    Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida

    2018-03-01

    Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through network and CMfg platforms. In CMfg system, task scheduling as an important means of finding suitable services for specific manufacturing tasks plays a key role in enhancing the system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First of all, a mathematical model of task scheduling is built based on analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method aiming for diverse tasks, which enables each service demander to obtain desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to searching for optimal task scheduling solutions. The effectiveness of the scheduling method proposed is verified by a case study with individualized customers' requirements. The results indicate that the proposed task scheduling method is able to achieve better performance than some usual algorithms such as simulated annealing and pattern search.

  13. An Introduction to Sked

    NASA Technical Reports Server (NTRS)

    Gipson, John

    2010-01-01

    In this note I give an overview of the VLBI scheduling software sked. I describe some of the algorithms used in automatic scheduling and some sked commands which have been introduced at users' requests. I also give a cookbook for generating some schedules.

  14. Future aircraft networks and schedules

    NASA Astrophysics Data System (ADS)

    Shu, Yan

    2011-07-01

    Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results of these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implication of using new aircraft in the future flight schedules. Finally, future research in three areas---model, computational method, and simulation for validation---is proposed.

  15. The Autism Diagnostic Observation Schedule, Module 4: Application of the Revised Algorithms in an Independent, Well-Defined, Dutch Sample (n = 93).

    PubMed

    de Bildt, Annelies; Sytema, Sjoerd; Meffert, Harma; Bastiaansen, Jojanneke A C J

    2016-01-01

    This study examined the discriminative ability of the revised Autism Diagnostic Observation Schedule module 4 algorithm (Hus and Lord in J Autism Dev Disord 44(8):1996-2012, 2014) in 93 Dutch males with Autism Spectrum Disorder (ASD), schizophrenia, psychopathy, or controls. The discriminative ability of the revised algorithm ASD cut-off resembled the original algorithm ASD cut-off: highly specific for psychopathy and controls, with lower sensitivity than Hus and Lord (2014; i.e. ASD .61, AD .53). The revised algorithm AD cut-off improved sensitivity over the original algorithm. Discriminating ASD from schizophrenia was still challenging, but the better-balanced sensitivity (.53) and specificity (.78) of the revised algorithm AD cut-off may aid clinicians' differential diagnosis. The findings support using the revised algorithm, which is conceptually consistent with the other modules and thus improves comparability across the lifespan.

  16. A hybrid online scheduling mechanism with revision and progressive techniques for autonomous Earth observation satellite

    NASA Astrophysics Data System (ADS)

    Li, Guoliang; Xing, Lining; Chen, Yingwu

    2017-11-01

    The autonomy of self-scheduling on Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers in recent decades. In reality, the limited onboard computational resources present a challenge for online scheduling algorithms. This study considered the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment, especially addressing urgent tasks that arrive stochastically during the scheduling horizon. We described the problem and proposed a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling with event-driven rescheduling based on a critical cumulative number, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of task, normal and urgent. On this basis, we developed two heuristic (re)scheduling algorithms and compared them with other commonly used techniques. Computational experiments indicated that the percentage of urgent tasks brought into the schedule by the proposed mechanism is much higher than that of a purely periodic scheduling mechanism, and that the specific performance depends strongly on several mechanism-relevant and task-relevant factors. For the online scheduling, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and on the percentage of successfully scheduled urgent tasks.
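
    As a rough illustration of the when-to-schedule policy described above (periodic rescheduling combined with event-driven rescheduling once a critical cumulative number of urgent tasks has arrived), here is a minimal sketch; the period, threshold, and class names are illustrative assumptions, not the paper's values.

      from dataclasses import dataclass

      @dataclass
      class WhenToSchedulePolicy:
          """Trigger rescheduling periodically or when enough urgent tasks accumulate."""
          period: float = 600.0       # illustrative rescheduling period (seconds)
          critical_count: int = 5     # illustrative critical cumulative number of urgent tasks
          last_run: float = 0.0
          pending_urgent: int = 0

          def on_urgent_arrival(self):
              self.pending_urgent += 1

          def should_reschedule(self, now: float) -> bool:
              periodic = (now - self.last_run) >= self.period
              event_driven = self.pending_urgent >= self.critical_count
              if periodic or event_driven:
                  self.last_run = now
                  self.pending_urgent = 0
                  return True
              return False

      policy = WhenToSchedulePolicy()
      for t, urgent in [(100, True), (200, True), (250, True), (300, True), (310, True), (700, False)]:
          if urgent:
              policy.on_urgent_arrival()
          if policy.should_reschedule(t):
              print(f"reschedule at t={t}")   # fires at t=310 once five urgent tasks have accumulated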

  17. Development of a job rotation scheduling algorithm for minimizing accumulated work load per body parts.

    PubMed

    Song, JooBong; Lee, Chaiwoo; Lee, WonJung; Bahn, Sangwoo; Jung, ChanJu; Yun, Myung Hwan

    2015-01-01

    For the successful implementation of job rotation, jobs should be scheduled systematically so that physical workload is evenly distributed across the various body parts used. However, while the potential benefits are widely recognized by research and industry, there is still a need for a more effective and efficient algorithm that considers multiple work-related factors in job rotation scheduling. This study suggests a job rotation algorithm that aims to minimize musculoskeletal disorders by decreasing the overall workload. Multiple work characteristics are evaluated as inputs to the proposed algorithm. Important factors, such as physical workload on specific body parts, working height, involvement of heavy lifting, and worker characteristics such as physical disorders, are included in the algorithm. For evaluation of the overall workload in a given workplace, an objective function was defined to aggregate the scores from the individual factors. A case study, in which the algorithm was applied at a workplace, is presented with an examination of its applicability and effectiveness. With the application of the suggested algorithm in the case study, the value of the final objective function, which is the weighted sum of the workload in various body parts, decreased by 71.7% when compared to a typical sequential assignment and by 84.9% when compared to a single job assignment, which is doing one job all day. The algorithm was developed using the data from the ergonomic evaluation tool used in the plant and from the known factors related to workload, so that it can be efficiently applied with a small amount of required input while covering a wide range of work-related factors. The case study showed that the algorithm was beneficial in determining a job rotation schedule aimed at minimizing workload across body parts.
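
    As a toy illustration of the kind of objective function described above (a weighted sum of the workload accumulated by each body part over a rotation schedule), the sketch below compares a rotation against a single-job assignment. The job workload table and body-part weights are invented for the example, not taken from the study.

      # Illustrative workload scores per job and body part (e.g. from an ergonomic checklist).
      JOBS = {
          "press":    {"shoulder": 4, "back": 2, "wrist": 1},
          "assembly": {"shoulder": 1, "back": 1, "wrist": 4},
          "packing":  {"shoulder": 2, "back": 3, "wrist": 2},
      }
      WEIGHTS = {"shoulder": 0.4, "back": 0.4, "wrist": 0.2}  # illustrative body-part weights

      def objective(schedule):
          """Weighted sum of workload accumulated by one worker over a rotation schedule."""
          accumulated = {part: 0.0 for part in WEIGHTS}
          for job in schedule:
              for part, load in JOBS[job].items():
                  accumulated[part] += load
          return sum(WEIGHTS[part] * accumulated[part] for part in WEIGHTS)

      # In this toy data set, rotating across jobs gives a lower weighted workload
      # than repeating the heaviest job all day.
      print(objective(["press", "assembly", "packing"]))   # rotation
      print(objective(["press", "press", "press"]))         # single job all day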

  18. Scheduling multimedia services in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing

    2018-02-01

    Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve the security level and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services across multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model including both subjective trust and objective trust to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the attributes of QoS, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed by considering the deadline, cost, and trust requirements of multimedia services. The scheduling algorithm heuristically searches for reasonable resource allocations that satisfy the trust requirements and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
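
    A minimal sketch of how subjective and objective trust degrees might be combined is shown below, assuming the subjective part is the mean of a Beta posterior over positive and negative interactions (a common Bayesian formulation) and the objective part is an average of normalized QoS attributes; the attribute names, ideal values, and blending weight are illustrative, not the paper's model.

      def subjective_trust(successes: int, failures: int) -> float:
          """Mean of a Beta(successes+1, failures+1) posterior (Bayesian update)."""
          return (successes + 1) / (successes + failures + 2)

      def objective_trust(qos: dict, ideal: dict) -> float:
          """Average of QoS attributes normalized against ideal values (higher is better)."""
          return sum(min(qos[k] / ideal[k], 1.0) for k in ideal) / len(ideal)

      def combined_trust(successes, failures, qos, ideal, alpha=0.5):
          """Blend subjective and objective trust; alpha is an illustrative weight."""
          return alpha * subjective_trust(successes, failures) + \
                 (1 - alpha) * objective_trust(qos, ideal)

      # Example: a provider with 18/20 good interactions and decent measured QoS.
      qos = {"bandwidth": 80.0, "availability": 0.97}
      ideal = {"bandwidth": 100.0, "availability": 0.999}
      print(combined_trust(18, 2, qos, ideal))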

  19. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with the development of production technology, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational times. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.

  20. A Quadtree Organization Construction and Scheduling Method for Urban 3D Model Based on Weight

    NASA Astrophysics Data System (ADS)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increase in urban 3D model precision and data quantity puts forward higher requirements for real-time rendering of digital city models. Improving the organization, management, and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these weights. An algorithm that extracts bounding boxes from model drawing primitives is also proposed to generate LOD models automatically. Using the proposed algorithm, a 3D urban planning and management software package was developed; practice has shown that the algorithm is efficient and feasible, with the render frame rates of both large and small scenes stable at around 25 frames.

  1. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.

  2. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no efficient algorithm for reaching the optimal solution of the problem. To minimize the holding, delay, and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions and uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. We consider a make-to-order production approach in which some job sequences are treated as tabu based on a maximum allowable setup cost. The results are compared with some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to the accuracy and efficiency of the solutions.
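
    The iterated swap procedure itself is not specified in the abstract, so the sketch below only shows the flavor of such an improvement heuristic: a permutation flow-shop makespan evaluation plus repeated improving adjacent swaps. It ignores the sequence-dependent setup times, tabu sequences, and cost terms that the paper actually optimizes; the job data are illustrative.

      def makespan(sequence, proc):
          """Completion time of the last job on the last machine of a permutation flow shop.
          proc[j][m] is the processing time of job j on machine m."""
          m_count = len(proc[0])
          finish = [0.0] * m_count
          for j in sequence:
              for m in range(m_count):
                  start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
                  finish[m] = start + proc[j][m]
          return finish[-1]

      def iterated_swap(sequence, proc):
          """Repeatedly apply improving adjacent swaps until no improvement remains."""
          best = list(sequence)
          improved = True
          while improved:
              improved = False
              for i in range(len(best) - 1):
                  cand = best[:]
                  cand[i], cand[i + 1] = cand[i + 1], cand[i]
                  if makespan(cand, proc) < makespan(best, proc):
                      best, improved = cand, True
          return best

      proc = [[3, 6, 2], [5, 1, 4], [2, 7, 3], [4, 2, 5]]   # 4 jobs x 3 machines (illustrative)
      seq = iterated_swap([0, 1, 2, 3], proc)
      print(seq, makespan(seq, proc))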

  3. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

    A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the FMS. To achieve low makespan and high throughput in FMS operations, it is imperative to integrate the production work center schedules with the AGV schedules. The production schedule for work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce the backtracking as well as the distance travelled by AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods applied in the literature.

  4. Energy conservation in ad hoc multimedia networks using traffic-shaping mechanisms

    NASA Astrophysics Data System (ADS)

    Chandra, Surendar

    2003-12-01

    In this work, we explore network traffic shaping mechanisms that deliver packets at pre-determined intervals, allowing the network interface to transition to a lower-power sleep state. We focus our efforts on commodity devices, IEEE 802.11b ad hoc mode, and popular streaming formats. We argue that factors such as the lack of scheduling clock phase synchronization among the participants and scheduling delays introduced by background tasks affect the potential energy savings. Increasing the periodic transmission delays to transmit data infrequently can offset some of these effects, at the expense of flooding the wireless channel for longer periods of time and potentially increasing the time to acquire the channel for non-multimedia traffic. Buffering mechanisms built into media browsers can prevent these added delays from being misinterpreted as network congestion. We show that practical implementations of such traffic shaping mechanisms can offer significant energy savings.
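
    A back-of-the-envelope model of why shaping traffic into periodic bursts saves energy is sketched below: the interface sleeps between bursts instead of idle listening. The power figures are rough, illustrative values for an 802.11b-class card, not measurements from the paper.

      def interface_energy(stream_seconds, burst_period, burst_duration,
                           p_active=1.4, p_idle=0.9, p_sleep=0.05):
          """Energy (joules) of a WNIC that wakes only for periodic bursts vs. idling.
          Power draws are illustrative, roughly in the range of 802.11b cards."""
          bursts = stream_seconds / burst_period
          awake = bursts * burst_duration
          shaped = awake * p_active + (stream_seconds - awake) * p_sleep
          unshaped = awake * p_active + (stream_seconds - awake) * p_idle
          return shaped, unshaped

      # A 300 s stream delivered in 0.5 s bursts every 5 s.
      shaped, unshaped = interface_energy(300, burst_period=5, burst_duration=0.5)
      print(f"shaped: {shaped:.0f} J, always-on: {unshaped:.0f} J")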

  5. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem for imaging observation tasks on high-altitude airships is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. The solution to the subproblem then determines the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided using a k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also describes the realization of the above algorithms and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  6. Comparison of Multiobjective Evolutionary Algorithms for Operations Scheduling under Machine Availability Constraints

    PubMed Central

    Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms, NSGA-II, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we find that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions on the approximate Pareto frontier. PMID:24489502

  7. Dynamic scheduling and planning parallel observations on large Radio Telescope Arrays with the Square Kilometre Array in mind

    NASA Astrophysics Data System (ADS)

    Buchner, Johannes

    2011-12-01

    Scheduling, the task of producing a timetable for resources and tasks, is well known to become more difficult as more resources are involved (an NP-hard problem). This is about to become an issue in radio astronomy as observatories consisting of hundreds to thousands of telescopes are planned and operated. The Square Kilometre Array (SKA), which Australia and New Zealand bid to host, is aiming for scales where current approaches, not only in construction and operation but also in scheduling, are insufficient. Although manual scheduling is common today, the problem is complicated by the demand for (1) independent sub-arrays doing simultaneous observations, which requires the scheduler to plan parallel observations, and (2) dynamic re-scheduling under changed conditions. Both of these requirements apply to the SKA, especially in the construction phase. We review the scheduling approaches taken in the astronomy literature, and also investigate techniques from human schedulers and today's observatories. The scheduling problem is specified in general for scientific observations and in particular for radio telescope arrays. Also taken into account is the fact that the observatory may be oversubscribed, requiring the scheduling problem to be integrated with a planning process. We solve this long-term scheduling problem using a time-based encoding that works in the very general case of observation scheduling. This research then compares algorithms from various approaches, including fast heuristics from CPU scheduling, linear integer programming, genetic algorithms, and branch-and-bound enumeration schemes. Measures include not only the goodness of the solution but also scalability and re-scheduling capabilities. In conclusion, we have identified a fast and good scheduling approach that allows (re-)scheduling of difficult and changing problems by combining heuristics with a genetic algorithm using block-wise mutation operations. We are able to explain and eradicate two problems in the literature: the inability of a GA to properly improve schedules, and the generation of schedules with frequent interruptions. Finally, we demonstrate the scheduling framework for several operating telescopes: (1) dynamic re-scheduling with the AUT Warkworth 12 m telescope, (2) scheduling for the Australian Mopra 22 m telescope, and (3) scheduling for the Allen Telescope Array. Furthermore, we discuss the applicability of the presented scheduling framework to the Atacama Large Millimeter/submillimeter Array (ALMA, under construction) and the SKA. In particular, during the development phase of the SKA, this dynamic, scalable scheduling framework can accommodate changing conditions.

  8. Earthquake Early Warning: Real-time Testing of an On-site Method Using Waveform Data from the Southern California Seismic Network

    NASA Astrophysics Data System (ADS)

    Solanki, K.; Hauksson, E.; Kanamori, H.; Wu, Y.; Heaton, T.; Boese, M.

    2007-12-01

    We have implemented an on-site early warning algorithm using the infrastructure of the Caltech/USGS Southern California Seismic Network (SCSN). We are evaluating the real-time performance of the software system and the algorithm for rapid assessment of earthquakes. In addition, we are interested in understanding what parts of the SCSN need to be improved to make early warning practical. Our EEW processing system is composed of many independent programs that process waveforms in real time. The codes were generated using a software framework. The Pd (maximum displacement amplitude of the P wave during the first 3 s) and Tau-c (a period parameter over the first 3 s) values determined during the EEW processing are forwarded to the California Integrated Seismic Network (CISN) web page for independent evaluation of the results. The on-site algorithm measures the amplitude of the P-wave (Pd) and the frequency content of the P-wave during the first three seconds (Tau-c). The Pd and Tau-c values make it possible to discriminate between a variety of events such as large distant events, nearby small events, and potentially damaging nearby events. Pd can be used to infer the expected maximum ground shaking. The method relies on data from a single station, although it becomes more reliable if readings from several stations are associated. To eliminate false triggers from stations with high background noise levels, we have created a per-station Pd threshold configuration for the Pd/Tau-c algorithm. To determine appropriate values for the Pd threshold, we calculate Pd thresholds for stations based on information from the EEW logs. We have operated our EEW test system for about a year and recorded numerous earthquakes in the magnitude range from M3 to M5. Two recent examples are a M4.5 earthquake near Chatsworth and a M4.7 earthquake near Elsinore. In both cases, the Pd and Tau-c parameters were determined successfully within 10 to 20 s of the arrival of the P-wave at the station. The Tau-c values predicted the magnitude within 0.1, and the predicted average peak ground motions were 0.7 cm/s and 0.6 cm/s. The delays in the system are caused mostly by the packetizing delay, because our software system is based on processing miniSEED packets. Most recently we have begun reducing the data latency using the new qmaserv2 software for the Q330 Quanterra datalogger. We implemented qmaserv2-based multicast receiver software to receive the native 1 s packets from the dataloggers. The receiver reads multicast packets from the network and writes them into a shared memory area. This new software will take full advantage of the capabilities of the Q330 datalogger and significantly reduce data latency for the EEW system. We have also implemented a new EEW sub-system that complements the currently running EEW system by associating Pd and Tau-c values from multiple stations. So far, we have implemented a new trigger generation algorithm for real-time processing in the sub-system, and we are able to routinely locate events and determine magnitudes using the Pd and Tau-c values.
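
    Pd and Tau-c have standard definitions in the early-warning literature: peak displacement over the first 3 s after the P pick, and 2*pi divided by the square root of the velocity-to-displacement power ratio over the same window. The sketch below computes them for a synthetic, evenly sampled signal; the SCSN system's picking logic, station corrections, and thresholds are not reproduced here.

      import numpy as np

      def pd_tauc(displacement, velocity, dt, window=3.0):
          """Pd (peak displacement) and Tau-c over the first `window` seconds after the P pick.
          Standard definitions: tau_c = 2*pi / sqrt( sum(v^2) / sum(u^2) )."""
          n = int(window / dt)
          u = np.asarray(displacement[:n])
          v = np.asarray(velocity[:n])
          pd = np.max(np.abs(u))
          r = np.sum(v ** 2) / np.sum(u ** 2)     # dt cancels in the ratio
          tau_c = 2.0 * np.pi / np.sqrt(r)
          return pd, tau_c

      # Synthetic example: a 1 Hz displacement oscillation sampled at 100 Hz.
      dt = 0.01
      t = np.arange(0, 3, dt)
      u = 0.002 * np.sin(2 * np.pi * 1.0 * t)     # metres
      v = np.gradient(u, dt)                      # metres per second
      print(pd_tauc(u, v, dt))                    # tau_c comes out close to 1 s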

  9. A group-based tasks allocation algorithm for the optimization of long leave opportunities in academic departments

    NASA Astrophysics Data System (ADS)

    Eyono Obono, S. D.; Basak, Sujit Kumar

    2011-12-01

    The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by the existing literature for different domains such as distributed databases, distributed systems, transportation, packet radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It presents the description of a workload allocation scheme and its algorithm for the allocation of an equitable number of tasks in academic departments where long leaves are necessary.

  10. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology

    PubMed Central

    2017-01-01

    This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations. PMID:28459848
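
    The least-mean-fourth family of algorithms replaces the squared-error cost of LMS with a fourth-power cost, which yields a weight update proportional to the cubed error. The paper's full microgrid state-space formulation and packet-dropout handling are not reproduced here; the sketch below only illustrates the core update on a toy linear estimation problem, with all signals, sizes, and step sizes chosen for illustration.

      import numpy as np

      def lmf_estimate(X, d, mu=0.1, n_weights=3):
          """Least-mean-fourth (LMF) adaptive estimation: w <- w + mu * e^3 * x,
          i.e. stochastic gradient descent on the fourth power of the error."""
          w = np.zeros(n_weights)
          for x, target in zip(X, d):
              e = target - w @ x            # instantaneous estimation error
              w = w + mu * (e ** 3) * x     # fourth-order cost gives a cubed-error update
          return w

      rng = np.random.default_rng(0)
      true_w = np.array([0.5, -0.2, 0.1])              # illustrative parameters to recover
      X = rng.uniform(-1.0, 1.0, size=(50000, 3))      # bounded regressors keep the update stable
      d = X @ true_w + 0.01 * rng.normal(size=50000)   # measurements with small noise
      print(lmf_estimate(X, d))                        # estimate moves close to true_w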

  11. Fuzzy Logic Based Anomaly Detection for Embedded Network Security Cyber Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Jason Wright

    Resiliency and security in critical infrastructure control systems in the modern world of cyber terrorism constitute a relevant concern. Developing a network security system specifically tailored to the requirements of such critical assets is of primary importance. This paper proposes a novel learning algorithm for an anomaly-based network security cyber sensor together with its hardware implementation. The presented learning algorithm constructs a fuzzy-logic rule-based model of normal network behavior. Individual fuzzy rules are extracted directly from the stream of incoming packets using an online clustering algorithm. This learning algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental test-bed mimicking the environment of a critical infrastructure control system.

  12. HF band filter bank multi-carrier spread spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system, which was presented in [1] and [2], to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited to the HF channel because it performs well in channels with frequency-selective fading and interference. This paper describes new algorithms for packet detection, timing recovery, and equalization that are suitable for the HF channel. An algorithm for optimizing the peak-to-average power ratio (PAPR) of the FB-MC-SS waveform is also presented; applying this algorithm results in a waveform with low PAPR. Simulation results using a wideband HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.

  13. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  14. Dynamic Appliances Scheduling in Collaborative MicroGrids System

    PubMed Central

    Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid

    2017-01-01

    In this paper, a new approach based on a collaborative system of MicroGrids (MGs) is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DLs), according to their electrical components. We propose a dynamic scheduling algorithm where users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DLs. In addition, implementation of the proposed algorithm requires dynamically analyzing two successive multi-objective optimization (MOO) problems. The first targets the activation schedule of non-flexible DLs and the second deals with the power profiles of flexible DLs. The MOO problems are solved using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, in order to show the efficiency of the proposed approach, a case study of a collaborative system consisting of 40 MGs registered in the load-curve flattening program has been developed. The results verify that the load curve can indeed become very flat by applying the proposed scheduling approach. PMID:28824226
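
    As a toy illustration of load-curve flattening, the sketch below scores flatness as the variance of the aggregate load and places a single deferrable load at the start hour that minimizes it. The paper itself solves two multi-objective problems with NSGA-II, which this single-objective example does not attempt; the load values and appliance profile are invented.

      import numpy as np

      def flatness(load_curve):
          """Variance of the aggregate load around its mean: lower means flatter."""
          return float(np.var(np.asarray(load_curve, dtype=float)))

      def best_start(base_load, appliance_profile):
          """Place a deferrable load's power profile at the start hour that flattens the curve most."""
          best_t, best_val = 0, float("inf")
          for t in range(len(base_load) - len(appliance_profile) + 1):
              trial = np.asarray(base_load, dtype=float)
              trial[t:t + len(appliance_profile)] += appliance_profile
              if flatness(trial) < best_val:
                  best_t, best_val = t, flatness(trial)
          return best_t, best_val

      base = [3, 3, 2, 2, 2, 4, 6, 7, 6, 5, 4, 3]   # illustrative 12-hour base load (kW)
      washer = [1.5, 1.5]                            # illustrative 2-hour appliance profile (kW)
      print(best_start(base, washer))                # picks a start hour in the valley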

  15. Proposed algorithm to improve job shop production scheduling using ant colony optimization method

    NASA Astrophysics Data System (ADS)

    Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari

    2017-12-01

    This paper deals with the determination of a job shop production schedule in an automated environment. In this particular environment, machines and the material handling system are integrated and controlled by a computer center, where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to run an unmanned production process for a specified time interval. We consider parts with various operation requirements, where each operation requires specific cutting tools. These parts are to be scheduled on machines with identical capabilities, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining lifetime of its cutting tools. We propose an algorithm based on the ant colony optimization method, implemented in MATLAB, to generate a production schedule that minimizes the total processing time of the parts (makespan). We tested the algorithm on data provided by a real industry, and the process shows a very short computation time, which contributes greatly to the flexibility and timeliness targeted in an automated environment.
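
    A compact ant-colony loop is sketched below as a stand-in for the paper's MATLAB implementation. To keep it short, it sequences jobs on a single machine to minimize total tardiness using a position-based pheromone matrix and no heuristic visibility term, which is much simpler than the job-shop model with cutting-tool constraints described above; all data and parameters are illustrative.

      import random

      PROC = [4, 2, 6, 3, 5]         # illustrative processing times
      DUE  = [6, 4, 18, 7, 12]       # illustrative due dates

      def total_tardiness(seq):
          t, tard = 0, 0
          for j in seq:
              t += PROC[j]
              tard += max(0, t - DUE[j])
          return tard

      def aco(n_ants=20, iters=100, rho=0.1, q=1.0):
          n = len(PROC)
          tau = [[1.0] * n for _ in range(n)]      # tau[pos][job]: pheromone for job at position
          best_seq, best_cost = None, float("inf")
          for _ in range(iters):
              for _ in range(n_ants):
                  remaining, seq = set(range(n)), []
                  for pos in range(n):
                      jobs = list(remaining)
                      weights = [tau[pos][j] for j in jobs]
                      j = random.choices(jobs, weights=weights)[0]
                      seq.append(j)
                      remaining.remove(j)
                  cost = total_tardiness(seq)
                  if cost < best_cost:
                      best_seq, best_cost = seq, cost
              # evaporate, then reinforce the best-so-far sequence
              for pos in range(n):
                  for j in range(n):
                      tau[pos][j] *= (1 - rho)
                  tau[pos][best_seq[pos]] += q / (1 + best_cost)
              # (a fuller implementation would also use a heuristic visibility term, e.g. slack)
          return best_seq, best_cost

      print(aco())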

  16. Balancing Contention and Synchronization on the Intel Paragon

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Nicol, David M.

    1996-01-01

    The Intel Paragon is a mesh-connected distributed memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately global synchronization has substantial overhead on the Paragon. At the same time the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmission from the same node to some extent. We develop a generalization of Scott's algorithm that executes complete exchange with a prescribed contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to tradeoff contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take interconnection structure into account. The Bounded contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.

  17. Scheduling quality of precise form sets which consist of tasks of circular type in GRID systems

    NASA Astrophysics Data System (ADS)

    Saak, A. E.; Kureichik, V. V.; Kravchenko, Y. A.

    2018-05-01

    Users' demand for computing power and the rise of technology favour the arrival of Grid systems. The quality of a Grid system's performance depends on the scheduling of computing and time resources. Grid systems with a centralized scheduling structure and users' tasks are modeled by a resource quadrant and resource rectangles, respectively. A non-Euclidean heuristic measure, which takes into consideration both the area and the form of an occupied resource region, is used to estimate the scheduling quality of heuristic algorithms. The authors use sets induced by the elements of a squared square as an example for studying the adaptability of a level polynomial algorithm with an excess and one with minimal deviation.

  18. [Wavelet packet extraction and entropy analysis of telemetry EEG from the prelimbic cortex of medial prefrontal cortex in morphine-induced CPP rats].

    PubMed

    Bai, Yu; Bai, Jia-Ming; Li, Jing; Li, Min; Yu, Ran; Pan, Qun-Wan

    2014-12-25

    The purpose of the present study is to analyze the relationship between the telemetry electroencephalogram (EEG) changes of the prelimbic (PL) cortex and the drug-seeking behavior of morphine-induced conditioned place preference (CPP) rats by using the wavelet packet extraction and entropy measurement. The recording electrode was stereotactically implanted into the PL cortex of rats. The animals were then divided randomly into operation-only control and morphine-induced CPP groups, respectively. A CPP video system in combination with an EEG wireless telemetry device was used for recording EEG of PL cortex when the rats shuttled between black-white or white-black chambers. The telemetry recorded EEGs were analyzed by wavelet packet extraction, Welch power spectrum estimate, normalized amplitude and Shannon entropy algorithm. The results showed that, compared with operation-only control group, the left PL cortex's EEG of morphine-induced CPP group during black-white chamber shuttling exhibited the following changes: (1) the amplitude of average EEG for each frequency bands extracted by wavelet packet was reduced; (2) the Welch power intensity was increased significantly in 10-50 Hz EEG band (P < 0.01 or P < 0.05); (3) Shannon entropy was increased in β, γ₁, and γ₂waves of the EEG (P < 0.01 or P < 0.05); and (4) the average information entropy was reduced (P < 0.01). The results suggest that above mentioned EEG changes in morphine-induced CPP group rat may be related to animals' drug-seeking motivation and behavior launching.

  19. Establishment of a definitive protocol for the diagnosis and management of body packers (drug mules).

    PubMed

    Mandava, Nageswara; Chang, Richard S; Wang, John H; Bertocchi, Michael; Yrad, Jonathan; Allamaneni, Shyam; Aboian, Edouard; Lall, Malini H; Mariano, Rosalind; Richards, Neil

    2011-02-01

    'Mules' or body packers are people who transport illegal drugs by packet ingestion into the gastrointestinal tract. These people are otherwise healthy and their management should maintain minimal morbidity. In this study, experience with body packers is presented and an algorithm for conservative and surgical management is provided. The clinical patient database for all body packer admissions at Mary Immaculate Hospital of Caritas Health Care Inc. from 1993 to 2005 was interrogated. 56 patients (4.5%) required admission out of a total of 1250 subjects confirmed to be body packers and apprehended by United States Customs officials at JFK International Airport, New York. The retrieved patient data were analysed retrospectively. 70% of the body packers were men, with a male to female ratio of 2.8 to 1. The mean age was 33 years and 52% were from Colombia. Heroin was the most common illegally transported substance (73%). 25 patients (45%) required surgical intervention, whereas 31 patients (55%) were successfully managed conservatively. Indications for intervention included bowel obstruction, packet rupture/toxicity, and delayed progression of packet transit on conservative management. Multiple intraoperative manoeuvres were used to remove the foreign bodies: gastrotomy, enterotomy and colotomy. Wound infection was the most common complication and was associated with distal enterotomy and colotomy. Men were more likely to present as body packers than women. Proximal enterotomies are preferred and multiple enterotomies should be avoided. A confirmatory radiological study is needed to demonstrate complete clearance of packets. A systematic protocol for the management of body packers results in minimal morbidity and no mortality.

  20. Minimization of Delay Costs in the Realization of Production Orders in Two-Machine System

    NASA Astrophysics Data System (ADS)

    Dylewski, Robert; Jardzioch, Andrzej; Dworak, Oliver

    2018-03-01

    The article presents a new algorithm that enables optimal scheduling of production orders in a two-machine system based on the minimum cost of order delays. The formulated algorithm uses the branch-and-bound method and is a generalisation of an algorithm for determining the sequence of production orders with the minimal sum of delays. In order to illustrate the proposed algorithm, the article contains examples accompanied by graphical solution trees. Research analysing the utility of the algorithm was conducted, and the achieved results proved its usefulness when applied to the scheduling of orders. The formulated algorithm was implemented in MATLAB. In addition, studies for different sets of production orders were conducted.

  1. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
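
    One plausible reading of the list-based cooling schedule is sketched below: the maximum of a temperature list drives the Metropolis test, and the temperatures implied by accepted worse moves replace that maximum. The neighborhood (segment reversal), list length, and iteration counts are illustrative, and the adaptation rule is a simplification rather than the authors' exact update.

      import math, random

      def tour_length(tour, dist):
          return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      def lbsa_tsp(dist, list_len=40, outer=300, inner=200):
          """List-based SA sketch: the max of a temperature list drives the Metropolis test,
          and the list is adapted from the temperatures implied by accepted worse moves."""
          n = len(dist)
          tour = list(range(n))
          random.shuffle(tour)
          cost = tour_length(tour, dist)
          temps = [abs(cost) * 0.1] * list_len               # crude initial temperature list
          best, best_cost = tour[:], cost
          for _ in range(outer):
              t_max = max(temps)
              accepted_temps = []
              for _ in range(inner):
                  i, j = sorted(random.sample(range(n), 2))
                  cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # segment reversal move
                  delta = tour_length(cand, dist) - cost
                  if delta <= 0:
                      tour, cost = cand, cost + delta
                  else:
                      r = random.random()
                      if r > 0 and math.exp(-delta / t_max) > r:
                          tour, cost = cand, cost + delta
                          accepted_temps.append(-delta / math.log(r))    # temperature that would accept this move
                  if cost < best_cost:
                      best, best_cost = tour[:], cost
              if accepted_temps:                                          # adapt the list: replace the max
                  temps.remove(t_max)
                  temps.append(sum(accepted_temps) / len(accepted_temps))
          return best, best_cost

      # Small random instance.
      random.seed(1)
      pts = [(random.random(), random.random()) for _ in range(25)]
      dist = [[math.dist(a, b) for b in pts] for a in pts]
      print(lbsa_tsp(dist)[1])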

  2. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost and resource efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations related decision making and planning, two fields that have shown to be most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization on joint energy and maintenance driven production planning. Accordingly, production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost per part model considering maintenance, energy, and production. The proposed methodology will involve a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost per part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) to determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted and the corresponding performance of the production schedule under variable parameter conditions, is evaluated. Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical Experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27 % in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost per part model, the baseline scenarios can obtain around 20 to 35 % in savings for the cost per part. These savings further increase by 42 to 55 % when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm adopted allows greater diversity and exploration compared to Genetic Algorithm for the proposed joint model which results in it being more computationally efficient in determining the optimal scheduling. While Genetic Algorithm could achieve a solution quality of 2,279.63 at an expense of 2,300 seconds in computational effort. 
In comparison, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half the computational effort required by Genetic Algorithm.
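
    The modified PSO itself is not specified in this abstract, so the sketch below only shows the standard PSO skeleton (inertia, cognitive, and social velocity terms) applied to a toy quadratic cost in place of the joint cost-per-part model; the swarm size, coefficients, and bounds are illustrative.

      import random

      def pso_minimize(cost, dim, bounds, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5):
          """Standard PSO skeleton: velocity = inertia + cognitive pull + social pull."""
          lo, hi = bounds
          pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          vel = [[0.0] * dim for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          pbest_cost = [cost(p) for p in pos]
          g = min(range(n_particles), key=lambda i: pbest_cost[i])
          gbest, gbest_cost = pbest[g][:], pbest_cost[g]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                  c = cost(pos[i])
                  if c < pbest_cost[i]:
                      pbest[i], pbest_cost[i] = pos[i][:], c
                      if c < gbest_cost:
                          gbest, gbest_cost = pos[i][:], c
          return gbest, gbest_cost

      # Toy stand-in for a cost model: a shifted quadratic with minimum at (2, -1, 3).
      cost = lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2 + (x[2] - 3) ** 2
      print(pso_minimize(cost, dim=3, bounds=(-10, 10)))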

  3. Simulation for Dynamic Situation Awareness and Prediction III

    DTIC Science & Technology

    2010-03-01

    source Java™ library for capturing and sending network packets; 4) Groovy, an open-source, Java-based scripting language (version 1.6 or newer). Open ... DMOTH Analyzer application. Groovy is an open-source dynamic scripting language for the Java Virtual Machine. It is consistent with Java syntax ... between temperature, pressure, wind and relative humidity, and 3) a precipitation editing algorithm. The Editor can be used to prepare scripted changes

  4. Power-efficient distributed resource allocation under goodput QoS constraints for heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Andreotti, Riccardo; Del Fiorentino, Paolo; Giannetti, Filippo; Lottici, Vincenzo

    2016-12-01

    This work proposes a distributed resource allocation (RA) algorithm for packet bit-interleaved coded OFDM transmissions in the uplink of heterogeneous networks (HetNets), characterized by small cells deployed over a macrocell area and sharing the same band. Every user allocates its transmission resources, i.e., bits per active subcarrier, coding rate, and power per subcarrier, to minimize the power consumption while both guaranteeing a target quality of service (QoS) and accounting for the interference inflicted by other users transmitting over the same band. The QoS consists of the number of information bits delivered in error-free packets per unit of time, or goodput (GP), estimated at the transmitter by resorting to an efficient effective SNR mapping technique. First, the RA problem is solved in the point-to-point case, thus deriving an approximate yet accurate closed-form expression for the power allocation (PA). Then, the interference-limited HetNet case is examined, where the RA problem is described as a non-cooperative game, providing a solution in terms of generalized Nash equilibrium. Thanks to the closed-form of the PA, the solution analysis is based on the best response concept. Hence, sufficient conditions for existence and uniqueness of the solution are analytically derived, along with a distributed algorithm capable of reaching the game equilibrium.

  5. Genetic algorithm for the optimization of features and neural networks in ECG signals classification

    NASA Astrophysics Data System (ADS)

    Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu

    2017-01-01

    Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
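
    A sketch of WPD-based statistical feature extraction is given below, assuming the PyWavelets package (pywt); the wavelet, decomposition level, and the choice of mean, standard deviation, and energy as statistics are illustrative, and the GA-driven dimension reduction and BPNN classifier from the paper are not included.

      import numpy as np
      import pywt   # assumes the PyWavelets package is installed

      def wpd_statistical_features(signal, wavelet="db4", level=4):
          """Statistical features of wavelet packet coefficients at a given decomposition level:
          mean, standard deviation, and energy of each terminal node (illustrative feature set)."""
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          features = []
          for node in wp.get_level(level, order="natural"):
              coeffs = np.asarray(node.data)
              features.extend([coeffs.mean(), coeffs.std(), float(np.sum(coeffs ** 2))])
          return np.array(features)

      # Toy "beat": a real pipeline would use segmented ECG beats from MIT-BIH records.
      t = np.linspace(0, 1, 360)
      beat = np.exp(-((t - 0.5) ** 2) / 0.002) + 0.05 * np.random.randn(t.size)
      features = wpd_statistical_features(beat)
      print(features.shape)   # 2**level nodes x 3 statistics = 48 features at level 4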

  6. An activity recognition model using inertial sensor nodes in a wireless sensor network for frozen shoulder rehabilitation exercises.

    PubMed

    Lin, Hsueh-Chun; Chiang, Shu-Yin; Lee, Kai; Kan, Yao-Chiang

    2015-01-19

    This paper proposes a model for recognizing motions performed during rehabilitation exercises for frozen shoulder conditions. The model consists of wearable wireless sensor network (WSN) inertial sensor nodes, which were developed for this study, and enables the ubiquitous measurement of bodily motions. The model employs the back propagation neural network (BPNN) algorithm to compute the motion data carried in the WSN packets; herein, six types of rehabilitation exercises were recognized. The packets sent by each node are converted into six components of acceleration and angular velocity along three axes. Motor features such as basic acceleration, angular velocity, and derivative tilt angle were input into the training procedure of the BPNN algorithm. In measurements of thirteen volunteers, the accelerations and included angles of nodes were adopted from the possible features to demonstrate the procedure. Five exercises involving simple swinging and stretching movements were recognized with an accuracy of 85%-95%; however, the accuracy with which exercises entailing spiral rotations were recognized was approximately 60%. Thus, a characteristic space and an enveloped spectrum improving the derivative features were suggested to enable the identification of customized parameters. Finally, a real-time monitoring interface was developed for practical implementation. The proposed model can be applied in ubiquitous healthcare self-management to recognize rehabilitation exercises.

  7. A two-hop based adaptive routing protocol for real-time wireless sensor networks.

    PubMed

    Rachamalla, Sandhya; Kancherla, Anitha Sheela

    2016-01-01

    One of the most important and challenging issues in wireless sensor networks (WSNs) is to optimally manage the limited energy of nodes without degrading the routing efficiency. In this paper, we propose an energy-efficient adaptive routing mechanism for WSNs, which saves energy of nodes by removing the much delayed packets without degrading the real-time performance of the used routing protocol. It uses the adaptive transmission power algorithm which is based on the attenuation of the wireless link to improve the energy efficiency. The proposed routing mechanism can be associated with any geographic routing protocol and its performance is evaluated by integrating with the well known two-hop based real-time routing protocol, PATH and the resulting protocol is energy-efficient adaptive routing protocol (EE-ARP). The EE-ARP performs well in terms of energy consumption, deadline miss ratio, packet drop and end-to-end delay.

  8. Evaluation Study of a Wireless Multimedia Traffic-Oriented Network Model

    NASA Astrophysics Data System (ADS)

    Vasiliadis, D. C.; Rizos, G. E.; Vassilakis, C.

    2008-11-01

    In this paper, a wireless multimedia traffic-oriented network scheme over a fourth generation system (4-G) is presented and analyzed. We conducted an extensive evaluation study for various mobility configurations in order to incorporate the behavior of the IEEE 802.11b standard over a test-bed wireless multimedia network model. In this context, the Quality of Services (QoS) over this network is vital for providing a reliable high-bandwidth platform for data-intensive sources like video streaming. Therefore, the main issues concerned in terms of QoS were the metrics for bandwidth of both dropped and lost packets and their mean packet delay under various traffic conditions. Finally, we used a generic distance-vector routing protocol which was based on an implementation of Distributed Bellman-Ford algorithm. The performance of the test-bed network model has been evaluated by using the simulation environment of NS-2.

  9. The use of wavelet packet transform and artificial neural networks in analysis and classification of dysphonic voices.

    PubMed

    Crovato, César David Paredes; Schuck, Adalberto

    2007-10-01

    This paper presents a dysphonic voice classification system using the wavelet packet transform and the best basis algorithm (BBA) as dimensionality reductor and 06 artificial neural networks (ANN) acting as specialist systems. Each ANN was a 03-layer multilayer perceptron with 64 input nodes, 01 output node and in the intermediary layer the number of neurons depends on the related training pathology group. The dysphonic voice database was separated in five pathology groups and one healthy control group. Each ANN was trained and associated with one of the 06 groups, and fed by the best base tree (BBT) nodes' entropy values, using the multiple cross validation (MCV) method and the leave-one-out (LOO) variation technique and success rates obtained were 87.5%, 95.31%, 87.5%, 100%, 96.87% and 89.06% for the groups 01 to 06, respectively.

  10. A Gaussian Wave Packet Propagation Approach to Vibrationally Resolved Optical Spectra at Non-Zero Temperatures.

    PubMed

    Reddy, Ch Sridhar; Prasad, M Durga

    2016-04-28

    An effective time-dependent approach, based on a method similar to the Gaussian wave packet propagation (GWP) technique of Heller, is developed for the computation of vibrationally resolved electronic spectra at finite temperatures in the harmonic, Franck-Condon/Herzberg-Teller approximations. Since the vibrational thermal density matrix of the ground electronic surface and the time evolution operator on that surface commute, it is possible to write the spectrum-generating correlation function as a trace of the time-evolved doorway state. In the stated approximations, the doorway state is a superposition of the harmonic oscillator zero- and one-quantum eigenfunctions and thus can be propagated by the GWP. The algorithm has an O(N^3) dependence on the number of vibrational modes. An application to the pyrene absorption spectrum at two temperatures is presented as a proof of concept.

  11. Scattering of an electronic wave packet by a one-dimensional electron-phonon-coupled structure

    NASA Astrophysics Data System (ADS)

    Brockt, C.; Jeckelmann, E.

    2017-02-01

    We investigate the scattering of an electron by phonons in a small structure between two one-dimensional tight-binding leads. This model mimics the quantum electron transport through atomic wires or molecular junctions coupled to metallic leads. The electron-phonon-coupled structure is represented by the Holstein model. We observe permanent energy transfer from the electron to the phonon system (dissipation), transient self-trapping of the electron in the electron-phonon-coupled structure (due to polaron formation and multiple reflections at the structure edges), and transmission resonances that depend strongly on the strength of the electron-phonon coupling and the adiabaticity ratio. A recently developed TEBD algorithm, optimized for bosonic degrees of freedom, is used to simulate the quantum dynamics of a wave packet launched against the electron-phonon-coupled structure. Exact results are calculated for a single electron-phonon site using scattering theory and analytical approximations are obtained for limiting cases.

  12. A new method for solving the quantum hydrodynamic equations of motion: application to two-dimensional reactive scattering.

    PubMed

    Pauler, Denise K; Kendrick, Brian K

    2004-01-08

    The de Broglie-Bohm hydrodynamic equations of motion are solved using a meshless method based on a moving least squares approach and an arbitrary Lagrangian-Eulerian frame of reference. A regridding algorithm adds and deletes computational points as needed in order to maintain a uniform interparticle spacing, and unitary time evolution is obtained by propagating the wave packet using averaged fields. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. The methodology is applied to a two-dimensional model collinear reaction with an activation barrier. Reaction probabilities are computed as a function of both time and energy, and are in excellent agreement with those based on the quantum trajectory method. (c) 2004 American Institute of Physics

  13. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high-level computer programming language and a program library are presented. The algorithm and problem-modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  14. An Enabling Technology for New Planning and Scheduling Paradigms

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth

    2004-01-01

    The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "task models," from the scientists and technologists for the tasks that are to be scheduled. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next, a cadre with knowledge of the vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, the models are modified to be compatible with the scheduling engine. Then the models are submitted to the scheduling engine for automatic scheduling or, when requirements are expressed in notes, the timeline is built manually. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (and limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components: (1) A new modeling schema allows all the requirements of the tasks to be expressed without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphical methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models without the intervention of a scheduling expert. The algorithm is tuned for the constraint hierarchies and the complex temporal relationships provided by the modeling schema. It has an extensive search algorithm that can exploit timing flexibilities and constraint and relationship options. (3) An innovative architecture allows multiple remote users to simultaneously model science and technology requirements and other users to model vehicle and hardware characteristics. The architecture allows the remote users to submit scheduling requests directly to the scheduling engine and immediately see the results. These three components are integrated so that science and technology experts with no knowledge of the vehicle or hardware subsystems, and no knowledge of the internal workings of the scheduling engine, can build and submit scheduling requests and see the results. The immediate feedback will hone the users' modeling skills and ultimately enable them to produce the desired timeline. This paper summarizes the three components of the enabling technology and describes how this technology would make a new paradigm possible.
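
    A hedged, purely hypothetical sketch of what a machine-readable task model with a constraint hierarchy and explicit temporal relationships might look like is given below; the class names, fields, and example values are invented for illustration and are not the Marshall modeling schema.

```python
# Hedged sketch (hypothetical): a task model expressed as a hierarchy of
# constraints plus explicit temporal relationships, of the kind a scheduling
# engine could consume directly instead of free-text notes.
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Constraint:
    """One requirement, e.g. a resource need, possibly with sub-constraints."""
    description: str
    children: List["Constraint"] = field(default_factory=list)  # hierarchy

@dataclass
class TemporalRelation:
    """Temporal link between two tasks (a small Allen-style subset)."""
    kind: Literal["before", "after", "during", "overlaps"]
    other_task: str
    min_gap_minutes: int = 0

@dataclass
class TaskModel:
    name: str
    duration_minutes: int
    constraints: List[Constraint] = field(default_factory=list)
    relations: List[TemporalRelation] = field(default_factory=list)

# Example: a science task that must follow hardware checkout by at least 30 min.
observation = TaskModel(
    name="science_observation_run",
    duration_minutes=90,
    constraints=[Constraint("power >= 150 W", [Constraint("bus B only")])],
    relations=[TemporalRelation("after", "hardware_checkout", 30)],
)
```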

  15. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high-demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, an aircraft performing IM operations may follow an aircraft whose TMA-TM-generated trajectory has substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause it to command less than optimal speed control behavior.

  16. Scheduling Dependent Real-Time Activities

    DTIC Science & Technology

    1990-08-01

    dependency relationships in a way that is suitable for all real-time systems. This thesis provides an algorithm, called DASA, that is effective for ... scheduling the class of real-time systems known as supervisory control systems. Simulation experiments that account for the time required to make scheduling

  17. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to that of a non-scalable global scheduling approach.
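
    A hedged sketch of one simple way such a migration policy could rank candidate sites is given below; the cost model (queue wait plus performance-scaled run time plus data-transfer time) and all names and units are illustrative assumptions, not the paper's algorithms.

```python
# Hedged sketch (hypothetical): rank candidate grid sites for a job by an
# estimated turnaround combining queue wait, run time scaled by relative site
# performance, and input/output transfer over the available bandwidth.
from dataclasses import dataclass
from typing import List

@dataclass
class Site:
    name: str
    queue_wait_s: float     # current estimated wait in the local queue
    relative_speed: float   # >1.0 means faster than the reference site
    bandwidth_mbps: float   # bandwidth from the job's current location

@dataclass
class Job:
    base_runtime_s: float   # run time on the reference site
    input_mb: float
    output_mb: float

def estimated_turnaround(job: Job, site: Site) -> float:
    transfer_s = (job.input_mb + job.output_mb) * 8.0 / site.bandwidth_mbps
    return site.queue_wait_s + job.base_runtime_s / site.relative_speed + transfer_s

def choose_site(job: Job, sites: List[Site]) -> Site:
    """Pick the candidate site with the smallest estimated turnaround."""
    return min(sites, key=lambda s: estimated_turnaround(job, s))
```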

  18. A collaborative scheduling model for the supply-hub with multiple suppliers and multiple manufacturers.

    PubMed

    Li, Guo; Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in an assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately; in the other, the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to that of the separate decisions made by each manufacturer and supplier. Furthermore, we show that the proposed algorithm has good convergence and reliability, which makes it applicable to more complicated supply chain environments.
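
    As orientation, a minimal sketch of a generic differential evolution loop (DE/rand/1/bin) is given below; the authors' auto-adapted variant additionally tunes the control parameters during the run, which is not reproduced here, and the placeholder cost function stands in for a decoded delivery-schedule objective.

```python
# Hedged sketch: generic DE/rand/1/bin differential evolution of the kind the
# paper builds on; encoding/decoding of delivery schedules is not shown.
import numpy as np

def differential_evolution(cost, dim, bounds, pop_size=40, F=0.5, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fitness[i]:                   # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[fitness.argmin()], fitness.min()

# Placeholder cost, e.g. total weighted tardiness of a decoded schedule.
best, best_cost = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                         dim=10, bounds=(-5.0, 5.0))
```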

  19. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    PubMed Central

    Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in an assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately; in the other, the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to that of the separate decisions made by each manufacturer and supplier. Furthermore, we show that the proposed algorithm has good convergence and reliability, which makes it applicable to more complicated supply chain environments. PMID:24892104

  20. Wireless Sensor Node Power Profiling Based on IEEE 802.11 and IEEE 802.15.4 Communication Protocols. Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Richardson, Joseph; Zhang, Yanliang

    Most wireless sensor network applications, which may comprise thousands of wireless sensor nodes (WSNs), require operation over extended periods of time beginning with their deployment. Network lifetime is therefore extremely critical and is one of the limiting factors for energy-constrained networks. Depending on the application, a wide range of energy sources is suitable for powering WSNs. A battery is traditionally used, but because a deployed network is expected to last for a long time and batteries hold only a finite amount of energy, replacing them is often not feasible. Recently there has been a surge of interest in energy harvesting, where ambient energy in the environment is used to prolong the lifetime of WSNs. Sources of ambient energy include solar power, thermal gradients, human motion and body heat, vibrations, and ambient RF energy. The design and development of thermoelectric generators (TEGs) to power WSNs that remain active for a long period of time requires a comprehensive understanding of WSN operation. This motivates research into modeling the lifetime, i.e., the power consumption, of a WSN by taking into consideration various node- and network-level activities. A WSN must perform three essential tasks: sense events, perform quick local processing of the sensed information, and wirelessly exchange the locally processed data with the base station or with other WSNs in the network. Each task has a power cost per unit time and an additional cost when switching between tasks. A number of other considerations must also be taken into account when computing the power consumption associated with each task, including the number of events occurring in a fixed active time period and the duration of each event, the event-information processing time, the total communication time, the number of retransmissions, etc. Additionally, at the network level the communication of data packets between WSNs involves collisions, latency, and retransmissions, which result in unanticipated power losses. This report focuses on rigorous stochastic modeling of the power demand of a schedule-driven WSN utilizing the Institute of Electrical and Electronics Engineers (IEEE) 802.11 and 802.15.4 communication protocols. The model captures the generic operation of a schedule-driven WSN when an external event occurs, i.e., sensing, followed by processing, and then communication. The report presents the development of an expression for the expected energy consumption per operational cycle of a schedule-driven WSN, taking into consideration the node-level activities, i.e., sensing and processing, and the network-level activities, i.e., channel access, packet collision, retransmission attempts, and transmission of a data packet.
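
    A hedged, highly simplified sketch of an expected-energy-per-cycle calculation in this spirit is given below; the parameter names, the two-switch assumption, and the truncated-geometric retransmission model are illustrative assumptions and not the report's derived expression.

```python
# Hedged sketch (simplified illustration): expected energy per operational cycle
# of a schedule-driven node, summing sensing, processing, and communication,
# with retransmissions modeled as a capped geometric number of extra attempts.
from dataclasses import dataclass

@dataclass
class NodeParams:
    p_sense_w: float     # power while sensing (W)
    t_sense_s: float     # sensing duration per event (s)
    p_proc_w: float      # power while processing (W)
    t_proc_s: float      # processing duration per event (s)
    p_tx_w: float        # power while transmitting (W)
    t_packet_s: float    # airtime of one data packet (s)
    e_switch_j: float    # energy cost of each task switch (J)
    p_collision: float   # per-attempt collision probability (0 <= p < 1)
    max_retries: int     # retransmission cap

def expected_tx_attempts(p: float, max_retries: int) -> float:
    """Mean number of transmissions under a truncated geometric retry scheme."""
    return sum(p ** k for k in range(max_retries + 1))

def expected_energy_per_cycle(n: NodeParams, events_per_cycle: float) -> float:
    e_sense = n.p_sense_w * n.t_sense_s
    e_proc = n.p_proc_w * n.t_proc_s
    e_comm = n.p_tx_w * n.t_packet_s * expected_tx_attempts(n.p_collision,
                                                            n.max_retries)
    e_switch = 2 * n.e_switch_j          # sense->process and process->transmit
    return events_per_cycle * (e_sense + e_proc + e_comm + e_switch)
```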
