Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions and ensure a higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on the fused path travel time distributions, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimates of link and path travel time distributions in congested road networks.
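To make the fusion step concrete, below is a minimal Python sketch of Dempster's rule applied to two discretized travel time distributions, one standing in for the point detector estimate and one for the interval detector estimate. The time bins and mass values are illustrative assumptions; the paper's actual frame of discernment and calibration are not reproduced here.

```python
# A minimal sketch of the fusion step, assuming singleton travel time bins.
def dempster_combine(m1, m2):
    """Combine two mass functions defined over the same singleton bins."""
    bins = m1.keys() & m2.keys()
    agreement = sum(m1[b] * m2[b] for b in bins)   # 1 - conflict mass
    if agreement == 0.0:
        raise ValueError("total conflict: sources are incompatible")
    return {b: (m1[b] * m2[b]) / agreement for b in bins}

# Travel time bins (seconds) with masses from the two detector types (assumed).
point_mass    = {60: 0.2, 90: 0.5, 120: 0.3}   # from point detectors
interval_mass = {60: 0.1, 90: 0.6, 120: 0.3}   # from interval detectors

fused = dempster_combine(point_mass, interval_mass)
print(fused)  # fused distribution; mean and variance follow directly
```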
Distributive routing and congestion control in wireless multihop ad hoc communication networks
NASA Astrophysics Data System (ADS)
Glauche, Ingmar; Krause, Wolfram; Sollacher, Rudolf; Greiner, Martin
2004-10-01
Due to their inherent complexity, engineered wireless multihop ad hoc communication networks represent a technological challenge. Having no mastering infrastructure, the nodes have to self-organize in such a way that, for example, network connectivity, good data traffic performance and robustness are guaranteed. This contribution focuses on routing and congestion control. First, random data traffic along shortest path routes is studied by simulations as well as theoretical modeling. Measures of congestion, such as end-to-end time delay and relaxation times, are given. A scaling law of the average time delay with respect to network size is revealed and found to depend on the underlying network topology. In the second step, a distributive routing and congestion control is proposed. Each node locally propagates its routing cost estimates and information about its congestion state to its neighbors, which then update their respective cost estimates. This allows for a flexible adaptation of end-to-end routes to the overall congestion state of the network. Compared to shortest-path routing, the critical network load is significantly increased.
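The local update rule described above, with neighbors exchanging cost estimates plus congestion state, resembles a distance-vector iteration. The Python sketch below shows one assumed form of it: each node keeps the cheapest advertised cost-to-destination, with a neighbor's queue backlog added to a unit link cost. Both the unit cost and the queue weighting are assumptions, not the paper's exact model.

```python
# A sketch of distributed cost propagation with a congestion term (assumed form).
def update_costs(neighbors, queue, cost, dest, rounds=10):
    """neighbors: {node: [node, ...]}, queue: {node: backlog}, cost: {node: estimate}."""
    for _ in range(rounds):
        changed = False
        for u in neighbors:
            if u == dest:
                continue
            best = min(1 + queue[v] + cost[v] for v in neighbors[u])
            if best < cost[u]:
                cost[u], changed = best, True
        if not changed:            # converged
            break
    return cost

INF = float("inf")
nbrs  = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
queue = {"A": 0, "B": 4, "C": 0, "D": 0}      # node B is congested
cost  = {"A": INF, "B": INF, "C": INF, "D": 0}
print(update_costs(nbrs, queue, cost, dest="D"))   # A routes via C, avoiding B
```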
pathChirp: Efficient Available Bandwidth Estimation for Network Paths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cottrell, Les
2003-04-30
This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp does not require synchronous nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
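A chirp, as described above, is a short packet train whose inter-packet gaps shrink geometrically so the instantaneous probing rate grows exponentially. The Python sketch below generates such a send schedule; the packet size, spread factor, and initial gap are illustrative values rather than pathChirp's defaults.

```python
# A sketch of a chirp send schedule, assuming illustrative parameter values.
packet_bits = 8000          # 1000-byte probe packets (assumed)
gamma = 1.2                 # spread factor: each gap is 1/gamma of the previous
first_gap = 0.010           # seconds between the first two packets (assumed)
n_packets = 12

send_times, gap, t = [0.0], first_gap, 0.0
for _ in range(n_packets - 1):
    t += gap
    send_times.append(t)
    gap /= gamma            # gaps shrink -> probing rate grows exponentially

for i in range(1, n_packets):
    dt = send_times[i] - send_times[i - 1]
    print(f"packet {i:2d}: gap {dt*1e3:6.2f} ms, "
          f"instantaneous rate {packet_bits/dt/1e6:6.2f} Mbps")
# The receiver compares arrival gaps with these send gaps: the rate at which
# queuing delay starts to build marks the available bandwidth.
```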
An improved global dynamic routing strategy for scale-free network with tunable clustering
NASA Astrophysics Data System (ADS)
Sun, Lina; Huang, Ning; Zhang, Yue; Bai, Yannan
2016-08-01
An efficient routing strategy can deliver packets quickly and thereby improve network capacity. Node congestion and transmission path length are unavoidable real-time factors for a good routing strategy. Existing dynamic global routing strategies consider only the congestion of neighbor nodes and the shortest path, ignoring the congestion of other key nodes on the path. With the development of detection methods and techniques, global traffic information is readily available and important for the routing choice. Reasonable use of this information can effectively improve network routing. Therefore, an improved global dynamic routing strategy is proposed, which considers the congestion of all nodes on the shortest path and incorporates the waiting time of the most congested node into the path cost. We investigate the effectiveness of the proposed routing for scale-free networks with different clustering coefficients. The shortest path routing strategy and a traffic-awareness routing strategy that considers only the waiting time of neighbor nodes are analyzed for comparison. Simulation results show that network capacity is greatly enhanced compared with shortest path routing, while the congestion state increases relatively slowly compared with the traffic-awareness routing strategy. For scale-free networks with tunable clustering, an increase in the clustering coefficient not only reduces network throughput but also increases the average transmission path length. The proposed routing strategy helps ease network congestion and can inform network routing strategy design.
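A minimal Python sketch of the proposed path cost follows: each candidate path is scored by its length plus the waiting time of its single most congested node, and the cheapest path wins. Equating waiting time with queue length is an illustrative assumption.

```python
# A sketch of "path length + waiting time of the most congested node" routing.
def all_simple_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph[src]:
        if nxt not in path:
            yield from all_simple_paths(graph, nxt, dst, path)

def best_path(graph, queue, src, dst):
    # cost = hop count + queue length of the most congested intermediate node
    return min(all_simple_paths(graph, src, dst),
               key=lambda p: len(p) - 1 + max(queue[n] for n in p[1:-1] or p))

graph = {"S": ["A", "B"], "A": ["T"], "B": ["C"], "C": ["T"], "T": []}
queue = {"S": 0, "A": 9, "B": 1, "C": 1, "T": 0}   # A is heavily congested
print(best_path(graph, queue, "S", "T"))           # ['S', 'B', 'C', 'T']
```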
Congestion patterns of electric vehicles with limited battery capacity.
Jing, Wentao; Ramezani, Mohsen; An, Kun; Kim, Inhi
2018-01-01
The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging times. This paper investigates the congestion/flow pattern captured by the stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where BEV paths are restricted by their battery capacities. BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both the path distance limit problem and the road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment, where the path cost is the sum of the corresponding link costs and a path-specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm.
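The path cost structure described above, link costs plus a path-specific out-of-energy penalty under a linear energy model, can be sketched in Python as follows. The energy coefficients, battery capacity, and penalty value are illustrative assumptions, and the full SUE assignment and Lagrangian dual machinery are not reproduced.

```python
# A sketch of the BEV path cost with an out-of-energy penalty (assumed values).
ALPHA_DIST = 0.15   # kWh per km      (assumed coefficient)
ALPHA_TIME = 0.02   # kWh per minute  (assumed coefficient)
PENALTY    = 1e4    # minutes; large penalty for energy-infeasible paths

def path_cost(links, battery_kwh):
    """links: list of (length_km, travel_time_min) tuples for one path."""
    time_min = sum(t for _, t in links)
    energy   = sum(ALPHA_DIST * d + ALPHA_TIME * t for d, t in links)
    return time_min + (PENALTY if energy > battery_kwh else 0.0)

short_but_slow = [(5.0, 20.0), (5.0, 25.0)]     # feasible: cost = 45 min
long_but_fast  = [(30.0, 15.0), (30.0, 15.0)]   # drains battery: penalized
for p in (short_but_slow, long_but_fast):
    print(path_cost(p, battery_kwh=8.0))
```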
Effective use of congestion in complex networks
NASA Astrophysics Data System (ADS)
Echagüe, Juan; Cholvi, Vicent; Kowalski, Dariusz R.
2018-03-01
In this paper, we introduce a congestion-aware routing protocol that selects paths according to the congestion of nodes in the network. The aim is twofold: on one hand, in order to prevent the network from collapsing, it provides good tolerance to node overloads; on the other hand, in order to guarantee efficient communication, it also incentivizes routes to follow short paths. We analyze the performance of our proposed routing strategy by means of a series of simulation experiments. We show that it provides a tolerance to collapse close to the optimal value. Furthermore, the average length of the paths behaves optimally up to a certain value of the packet generation rate ρ and grows linearly as ρ increases further.
A real-time path rating calculation tool powered by HPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
If transmission path ratings are determined in real time and optimized control methods can be implemented, congestion problems can be more effectively managed using the existing transmission assets, reducing congestion costs, avoiding capital expenditures for new physical assets, increasing revenues from the existing system, and maintaining reliability. In just one illustrative case, a BPA study has shown that a 1000-MW rating increase for a transmission path generates $15M in annual revenue, even if only 25% of the increased margin can be tapped for just 25% of the year.
Auctionable fixed transmission rights for congestion management
NASA Astrophysics Data System (ADS)
Alomoush, Muwaffaq Irsheid
Electric power deregulation has brought a major change to the regulated utility monopoly. The change represents the main thrust of engineers' efforts to reshape the three components of today's regulated monopoly: generation, distribution and transmission. In this open-access deregulated power market, the transmission network plays a major role, and transmission congestion is a major problem that requires further consideration, especially when an inter-zonal/intra-zonal scheme is implemented. Declaring that engineering studies and experience are the criteria for defining zonal boundaries, or defining a zone on the basis that a zone is a densely interconnected area (a lake) while the paths connecting these densely interconnected areas are inter-zonal lines, yields insufficient and fuzzy definitions. Moreover, a congestion problem formulation should take into consideration the interactions between intra-zonal and inter-zonal flows and their effects on power systems. In this thesis, we introduce a procedure for minimizing the number of adjustments of preferred schedules to alleviate congestion, and we apply control schemes to minimize interactions between zones. In addition, we give the zone definition a concrete criterion based on the Locational Marginal Price (LMP). This concept is used to define congestion zonal boundaries and to decide whether any zone should be merged with another zone or split into new zones. The thesis presents a unified scheme that combines zonal and FTR schemes to manage congestion. This combined scheme is utilized with LMPs to define zonal boundaries more appropriately. The presented scheme retains the best features of the FTR scheme, which are providing financial certainty, maximizing the efficient use of the system and making users pay for the actual use of congested paths. LMPs may give an indication of the impact of wheeling transactions, and calculations and comparisons of LMPs with and without wheeling transactions should be adequate criteria for the ISO to approve a transaction, decide to expand the existing system, or retain the original structure of the system. The thesis also investigates the impact of wheeling transactions on congestion management, presenting a generalized mathematical model for the Fixed Transmission Right (FTR) auction. The auction guarantees FTR availability to all participants on a non-discriminatory basis, in which system users are permitted to buy, sell and trade FTRs through an auction. When FTRs are utilized with LMPs, they increase the efficient use of the transmission system and let a transmission customer gain advantageous features such as a mechanism to offset the extra cost due to congestion charges, financial and operational certainty, and payment for the actual use of congested paths. The thesis also highlights FTR trading in secondary markets to self-arrange access across different paths, create long-term transmission rights and provide more commercial certainty.
Adapting End Host Congestion Control for Mobility
NASA Technical Reports Server (NTRS)
Eddy, Wesley M.; Swami, Yogesh P.
2005-01-01
Network layer mobility allows transport protocols to maintain connection state despite changes in a node's physical location and point of network connectivity. However, some congestion-controlled transport protocols are not designed to deal with these rapid and potentially significant path changes. In this paper we demonstrate several distinct problems that mobility-induced path changes can create for TCP performance. Our premise is that mobility events indicate path changes that require re-initialization of congestion control state at both connection end points. We present the application of this idea to TCP in the form of a simple solution (the Lightweight Mobility Detection and Response algorithm, which has been proposed in the IETF), and examine its effectiveness. In general, we find that the deficiencies presented are relatively easily and painlessly fixed using this solution. We also find that this solution has the counter-intuitive property of being both more friendly to competing traffic and simultaneously more aggressive in utilizing newly available capacity than unmodified TCP.
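The core of the idea, re-initializing congestion control state on a mobility event, can be sketched as below. The Python class mirrors generic TCP slow-start variables rather than any particular stack, and the initial values are assumptions.

```python
# A sketch of resetting congestion state on a mobility event (generic TCP-like
# state, assumed initial values; not the exact LMDR specification).
class CongestionState:
    INIT_CWND = 2        # segments (assumed initial window)

    def __init__(self, mss=1460):
        self.mss = mss
        self.reset()

    def reset(self):
        """Re-enter slow start: forget everything learned on the old path."""
        self.cwnd = self.INIT_CWND * self.mss
        self.ssthresh = float("inf")   # capacity of the new path is unknown
        self.srtt = None               # old RTT estimate is no longer valid

    def on_mobility_event(self):
        # The path changed: old cwnd/ssthresh/RTT describe a route that no
        # longer exists, so they are neither safe nor useful to keep.
        self.reset()

conn = CongestionState()
conn.cwnd = 80 * conn.mss            # window grown on the old path
conn.on_mobility_event()
print(conn.cwnd, conn.ssthresh)      # back to slow-start values
```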
NASA Astrophysics Data System (ADS)
Huang, Jinhui; Liu, Wenxiang; Su, Yingxue; Wang, Feixue
2018-02-01
Space networks, in which connectivity is deterministic and intermittent, can be modeled as delay/disruption tolerant networks. In space delay/disruption tolerant networks, a packet is usually transmitted from the source node to the destination node indirectly via a series of relay nodes. If any one of the nodes in the path becomes congested, the packet will be dropped due to buffer overflow. One of the main reasons behind congestion is an unbalanced distribution of network traffic. We propose a load balancing strategy that takes the congestion status of both the local node and the relay nodes into account. The congestion status, together with the end-to-end delay, is used in the routing selection. A lookup-table enhancement is also proposed: off-line computation and on-line adjustment are combined to make a more precise estimate of the end-to-end delay while reducing the onboard computation. Simulation results show that the proposed strategy helps to distribute network traffic more evenly and therefore reduces the packet drop ratio. In addition, the average delay is also decreased in most cases. The lookup-table enhancement provides a compromise between the need for better communication performance and the desire for less onboard computation.
Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-05-06
The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable technology to deploy wide-scale Wireless Sensor Networks (WSNs). These networks are usually designed to support convergecast traffic, where all communication paths go through the PAN (Personal Area Network) coordinator. Nevertheless, peer-to-peer communication relationships may also be required for different types of WSN applications. That is the typical case of sensor and actuator networks, where local control loops must be closed using a reduced number of communication hops. The use of communication schemes optimised just for the support of convergecast traffic may result in higher network congestion and in a potentially higher number of communication hops. Within this context, this paper proposes an Alternative-Route Definition (ARounD) communication scheme for WSNs. The underlying idea of ARounD is to set up alternative communication paths between specific source and destination nodes, avoiding congested cluster-tree paths. These alternative paths consider shorter inter-cluster paths, using a set of intermediate nodes to relay messages during their inactive periods in the cluster-tree network. Simulation results show that the ARounD communication scheme can significantly decrease the end-to-end communication delay, when compared to the use of standard cluster-tree communication schemes. Moreover, the ARounD communication scheme is able to reduce the network congestion around the PAN coordinator, enabling the reduction of the number of message drops due to queue overflows in the cluster-tree network.
A novel communication mechanism based on node potential multi-path routing
NASA Astrophysics Data System (ADS)
Bu, Youjun; Zhang, Chuanhao; Jiang, YiMing; Zhang, Zhen
2016-10-01
As networks scale rapidly and new network applications emerge frequently, bandwidth supply in today's Internet cannot keep up with the rapidly increasing requirements. Unfortunately, irrational use of network resources makes things worse. Current networks deploy single-next-hop optimized paths for data transmission, but such a "best effort" model leads to imbalanced use of network resources and often causes local congestion. Multi-path routing, on the other hand, can efficiently use the aggregate bandwidth of multiple paths and improve network robustness, security, load balancing and quality of service. As a result, multi-path routing has attracted much attention in the routing and switching research fields, and many important ideas and solutions have been proposed. This paper focuses on implementing the parallel transmission of data over multiple next hops, balancing network traffic and reducing congestion. It aims to explore the key technologies of multi-path communication networks, providing feasible academic support for subsequent applications of multi-path communication networking. A novel multi-path algorithm based on node potential in the network is proposed. The algorithm can make full use of network link resources and effectively balance network link utilization.
Mobility and Congestion in Dynamical Multilayer Networks with Finite Storage Capacity
NASA Astrophysics Data System (ADS)
Manfredi, S.; Di Tucci, E.; Latora, V.
2018-02-01
Multilayer networks describe well many real interconnected communication and transportation systems, ranging from computer networks to multimodal mobility infrastructures. Here, we introduce a model in which the nodes have a limited capacity of storing and processing the agents moving over a multilayer network, and their congestions trigger temporary faults which, in turn, dynamically affect the routing of agents seeking for uncongested paths. The study of the network performance under different layer velocities and node maximum capacities reveals the existence of delicate trade-offs between the number of served agents and their time to travel to destination. We provide analytical estimates of the optimal buffer size at which the travel time is minimum and of its dependence on the velocity and number of links at the different layers. Phenomena reminiscent of the slower is faster effect and of the Braess' paradox are observed in our dynamical multilayer setup.
A Benes-like theorem for the shuffle-exchange graph
NASA Technical Reports Server (NTRS)
Schwabe, Eric J.
1992-01-01
One of the first theorems on permutation routing, proved by V. E. Benes (1965), shows that given a set of source-destination pairs in an N-node butterfly network with at most a constant number of sources or destinations in each column of the butterfly, there exists a set of paths of lengths O(log N) connecting each pair such that the total congestion is constant. An analogous theorem yielding constant-congestion paths for off-line routing in the shuffle-exchange graph is proved here. The necklaces of the shuffle-exchange graph play the same structural role as the columns of the butterfly in Benes' theorem.
The impact of self-driving cars on existing transportation networks
NASA Astrophysics Data System (ADS)
Ji, Xiang
2018-04-01
In this paper, considering the usage of self-driving cars, I study the congestion problems of traffic networks at both macro and micro levels. First, a macroscopic mathematical model is established using the Greenshields function, the analytic hierarchy process and Monte Carlo simulation, where the congestion level is divided into five levels according to the average vehicle speed. Roads with obvious congestion are investigated first, and their traffic flow and topology are analyzed. By processing the data, I propose a traffic congestion model. In the model, I assume that half of the non-self-driving cars take only the shortest route while the other half choose their paths randomly. Self-driving cars, by contrast, can obtain vehicle density data for each road and choose their paths more reasonably; when the traffic density of a path exceeds a specific value, it cannot be selected. To overcome the dimensional differences in the data, I rate the paths by BORDA sorting, as sketched below. A Monte Carlo simulation of a cellular automaton is used to obtain negative feedback information on the density of the traffic network, with vehicles added to the road network one by one. I then analyze the influence of this negative feedback information on the path selection of intelligent cars. The conclusion is that increasing the proportion of intelligent vehicles makes the road load more balanced, and self-driving cars can avoid the peak and reduce the degree of road congestion. Combined with other models, the optimal self-driving ratio is about sixty-two percent. At the microscopic level, using the single-lane NS traffic rule, another model is established to analyze a road partition scheme. Self-driving traffic is more intelligent, and its cooperation can reduce the random deceleration probability. With this model, I obtain the space-time distributions for different self-driving ratios. I also simulate the case of reserving a lane exclusively for self-driving cars and compare it with the former model. It is concluded that a dedicated lane is more efficient within a certain interval; however, offering a separate lane is not recommended. Self-driving also faces the problems of hacker attacks and greater damage after faults, so when the self-driving ratio is higher than a certain value, the increase in traffic flow rate is small. That threshold value is discussed in this article, and the optimal proportion is determined. Finally, I give a nontechnical explanation of the problem.
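The BORDA step mentioned above can be sketched as follows: each candidate path is ranked separately on every attribute, and the rank points are summed so attributes with different units can be compared. The paths and attribute values in this Python sketch are made up for illustration.

```python
# A sketch of BORDA sorting over paths with incommensurable attributes.
def borda_scores(paths, attributes):
    """attributes: {name: {path: value}}; lower attribute values rank better."""
    scores = {p: 0 for p in paths}
    for values in attributes.values():
        ranked = sorted(paths, key=lambda p: values[p])      # best first
        for points, p in enumerate(reversed(ranked)):        # best gets most points
            scores[p] += points
    return scores

paths = ["P1", "P2", "P3"]
attributes = {
    "length_km": {"P1": 4.0, "P2": 5.5, "P3": 6.0},
    "density":   {"P1": 0.9, "P2": 0.3, "P3": 0.5},   # P1 is congested
}
scores = borda_scores(paths, attributes)
print(max(scores, key=scores.get))   # P2: decent length, low density
```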
Extended shortest path selection for package routing of complex networks
NASA Astrophysics Data System (ADS)
Ye, Fan; Zhang, Lei; Wang, Bing-Hong; Liu, Lu; Zhang, Xing-Yi
The routing strategy plays a very important role in complex networks such as the Internet and peer-to-peer networks. However, most previous work concentrates only on path selection, e.g. Flooding and Random Walk, or on finding the shortest path (SP), and rarely considers local load information, as in SP and Distance Vector Routing. Flow-based Routing mainly considers load balance and still cannot achieve the best optimization. Thus, in this paper, we propose a novel dynamic routing strategy on complex networks that incorporates local load information into the SP algorithm to enhance traffic flow routing optimization. We find that the flow in a network is greatly affected by waiting times, so one should not only choose an optimized path for package transmission but also consider node congestion. As a result, packages should be transmitted along a globally optimized path with less congestion and relatively short distance. Analysis and simulation experiments show that the proposed algorithm can largely enhance network flow, achieving maximum throughput within an acceptable computation time. A detailed analysis of the algorithm is also provided to explain its efficiency.
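One assumed way to fold local load information into shortest path selection, in the spirit of the proposal above, is to run Dijkstra on weights of the form link cost plus a multiple of the next node's queue length. The weighting form and the constant beta in this Python sketch are assumptions, not the paper's exact algorithm.

```python
# A sketch of load-aware shortest path selection (assumed weighting form).
import heapq

def load_aware_path(adj, queue_len, src, dst, beta=1.0):
    """adj: {u: [(v, w), ...]}; queue_len: {node: packets waiting}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w + beta * queue_len[v]   # link cost + load at next node
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

adj = {"S": [("A", 1), ("B", 2)], "A": [("T", 1)], "B": [("T", 2)], "T": []}
qlen = {"S": 0, "A": 6, "B": 0, "T": 0}          # hub A is overloaded
print(load_aware_path(adj, qlen, "S", "T"))      # ['S', 'B', 'T']
```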
An efficient routing strategy for traffic dynamics on two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Wang, Huiling; Zhang, Zhuxi; Zhang, Yi; Duan, Congwen; Qi, Zhaohui; Liu, Yu
2018-05-01
In order to alleviate traffic congestion on multilayer networks, designing an efficient routing strategy is one of the most important approaches. In this paper, a novel routing strategy is proposed to reduce traffic congestion on two-layer networks. In the proposed strategy, the optimal paths in the physical layer are chosen by comprehensively considering the node degrees in both layers. Both numerical and analytical results indicate that our routing strategy can reasonably redistribute the traffic load of the physical layer, and thus the traffic capacity of two-layer complex networks is significantly enhanced compared with the shortest path routing (SPR) and global awareness routing (GAR) strategies. This study may shed some light on the optimization of networked traffic dynamics.
RPT: A Low Overhead Single-End Probing Tool for Detecting Network Congestion Positions
2003-12-20
Detecting the points of network congestion is an intriguing research problem, because this information can benefit both regular network users and Internet Service Providers. It is also highly challenging: for a complete evaluation on the Internet, we need to know the real available bandwidth on all the links of a network path, but that information is hard to obtain.
Congestion estimation technique in the optical network unit registration process.
Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk
2016-07-01
We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can thus obtain the congestion level among the ONUs to be registered, and this information may be exploited to change the size of the quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
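As a rough illustration of this kind of estimate (not the paper's actual scheme), suppose each of n unregistered ONUs answers in one uniformly random slot of a quiet window with W slots; then the expected number of slots containing a collision is W(1 - p_empty - p_single), which is increasing in n and can be inverted from the observed collided-slot count. The Python sketch below does exactly that.

```python
# A sketch of estimating the number of contending ONUs from collided slots,
# assuming a uniform random-slot model (illustrative, not the paper's scheme).
def expected_collided_slots(n, w):
    p_empty  = (1 - 1 / w) ** n                     # slot drew no ONU
    p_single = (n / w) * (1 - 1 / w) ** (n - 1)     # slot drew exactly one ONU
    return w * (1 - p_empty - p_single)

def estimate_contenders(collided_slots, w, n_max=1024):
    # expected_collided_slots is increasing in n, so the closest match is unique
    return min(range(1, n_max),
               key=lambda n: abs(expected_collided_slots(n, w) - collided_slots))

W = 16
n_hat = estimate_contenders(collided_slots=3, w=W)
print("estimated colliding ONUs:", n_hat)           # ~13 for this observation
print("suggested next quiet window:", 2 * n_hat)    # two slots per contender (assumed rule)
```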
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
Improved efficient routing strategy on two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Han, Weizhan; Guo, Qing; Zhang, Shuai; Wang, Junfang; Wang, Zhihao
2016-10-01
The traffic dynamics of multi-layer networks has become a hot research topic since many networks are composed of two or more layers of subnetworks. Due to its low traffic capacity, the traditional shortest path routing (SPR) protocol is susceptible to congestion on two-layer complex networks. In this paper, we propose an efficient routing strategy named the improved global awareness routing (IGAR) strategy, which is based on the betweenness centrality of nodes in the two layers. With the proposed strategy, routing paths can bypass the hub nodes of both layers to enhance transport efficiency. Simulation results show that the IGAR strategy achieves much higher traffic capacity than the SPR and global awareness routing (GAR) strategies. Because of the significantly improved traffic performance, this study is helpful for alleviating congestion in two-layer complex networks.
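A sketch of the flavor of such a strategy: score every node by its betweenness centrality in each layer, then search for shortest paths on edge weights that grow with the endpoints' combined centrality, so hubs in either layer are bypassed. The weight formula in this Python sketch (using networkx) is an assumption, not the paper's exact IGAR definition.

```python
# A sketch of hub-avoiding routing driven by per-layer betweenness centrality.
import networkx as nx

def igar_like_path(physical, logical, src, dst, alpha=1.0):
    bc_phys = nx.betweenness_centrality(physical)
    bc_logi = nx.betweenness_centrality(logical)
    score = {n: bc_phys.get(n, 0.0) + bc_logi.get(n, 0.0) for n in physical}
    weighted = physical.copy()
    for u, v in weighted.edges():
        # edges touching central nodes in either layer become expensive (assumed form)
        weighted[u][v]["w"] = 1.0 + alpha * (score[u] + score[v])
    return nx.shortest_path(weighted, src, dst, weight="w")

phys = nx.Graph([(0, 1), (1, 2), (0, 3), (3, 4), (4, 2)])  # ring with node 1 on it
logi = nx.Graph([(0, 1), (1, 2), (1, 3), (1, 4)])          # node 1 is the logical hub
print(igar_like_path(phys, logi, 0, 2))   # detours via 3-4, bypassing node 1
```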
Congestion and recreation site demand: a model of demand-induced quality effects
Douglas, Aaron J.; Johnson, Richard L.
1993-01-01
This analysis focuses on problems of estimating the site-specific dollar benefits conferred by outdoor recreation sites in the face of congestion costs. Encounters, crowding effects and congestion costs have often been treated by natural resource economists in a piecemeal fashion. In the current paper, encounters and crowding effects are treated systematically. We emphasize the quantitative impact of congestion costs on site-specific estimates of the benefits conferred by improvements in outdoor recreation sites. The principal analytic conclusion is that techniques that economize on data requirements produce biased estimates of the benefits conferred by site improvements at facilities with significant crowding effects. The principal policy recommendation is that federal and state agencies should collect and store information on visitation rates, encounter levels and congestion costs at various outdoor recreation sites.
DOT National Transportation Integrated Search
2010-03-01
Incidents account for a large portion of all congestion and a need clearly exists for tools to predict and estimate incident effects. This study examined (1) congestion back propagation to estimate the length of the queue and travel time from upstrea...
Topics on data transmission problem in software definition network
NASA Astrophysics Data System (ADS)
Gao, Wei; Liang, Li; Xu, Tianwei; Gan, Jianhou
2017-08-01
In normal computer networks, data transmission between two sites goes through the shortest path between the two corresponding vertices. In the setting of a software definition network (SDN), however, the network traffic flow at each site and channel should be monitored in a timely manner, and the data transmission path between two sites should take the congestion of the current network into account. Hence, the difference in available data transmission theory between a normal computer network and a software definition network is that in SDN we must consider prohibited graph structures: these forbidden subgraphs represent the sites and channels through which data cannot pass because of serious congestion. Inspired by the theoretical analysis of available data transmission in SDN, we consider some computational problems from the perspective of graph theory. Several results determined in the paper imply sufficient conditions for data transmission in SDN in various graph settings.
Probabilistic computer model of optimal runway turnoffs
NASA Technical Reports Server (NTRS)
Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.
1985-01-01
Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model, which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category, is defined. The model includes an algorithm for lateral ride comfort limits.
An evaluation of congestion-reducing measures used in Virginia.
DOT National Transportation Integrated Search
1992-01-01
Congestion on our nation's highways, especially in urban areas, is a serious problem that is growing steadily worse. In Virginia, it is estimated that 28 percent of the daily vehicle miles of travel (VMT) occurring during peak hour traffic is congest...
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-06-30
Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles driving on city roads of limited capacity. Vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoiding traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work is an approach for the dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to the other algorithms; similar performance patterns were achieved for the Birmingham test scenario.
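The core loop of such an approach can be sketched as simulated annealing over candidate routes with a cost built from the two named attributes, road length and average travel speed, i.e. estimated travel time. The neighborhood move, cooling schedule, and route data in this Python sketch are illustrative; the paper's TOPSIS variant combines the attributes differently.

```python
# A sketch of simulated annealing over candidate routes (assumed data and schedule).
import math
import random

random.seed(1)

# Candidate routes: (total_length_km, average_speed_kmh) - illustrative values.
routes = [(10.0, 20.0), (12.0, 35.0), (15.0, 50.0), (11.0, 25.0)]

def cost(route):
    length, speed = route
    return length / speed * 60.0          # estimated travel time in minutes

def anneal(routes, t0=10.0, cooling=0.95, steps=200):
    current = random.choice(routes)
    best, temp = current, t0
    for _ in range(steps):
        candidate = random.choice(routes)             # neighborhood move
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate                        # accept (maybe uphill)
        if cost(current) < cost(best):
            best = current
        temp *= cooling
    return best

print(anneal(routes))   # (15.0, 50.0): the longest route but fastest in time
```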
Inferring the background traffic arrival process in the Internet.
Hága, Péter; Csabai, István; Vattay, Gábor
2009-12-01
Phase transitions have been found in many complex interacting systems, and complex networks are no exception; however, there are few real systems in which the emergence of this nontrivial behavior can be understood directly from the microscopic view. In this paper, we present the emergence of the phase transition between the congested and uncongested phases of a network link. We demonstrate a method to infer the background traffic arrival process, which is one of the key state parameters of Internet traffic. The traffic arrival process in the Internet has been investigated in several studies since the recognition of its self-similar nature. The statistical properties of the traffic arrival process are very important since they are fundamental to modeling the dynamical behavior. Here, we demonstrate how the widely used packet train technique can be used to determine the main properties of the traffic arrival process. We show that the packet train dispersion is sensitive to congestion on the network path. We introduce the packet train stretch as an order parameter to describe the phase transition between the congested and uncongested phases of the bottleneck link in the path. We find that the distribution of the background traffic arrival process can be determined from the average packet train dispersion at the critical point of the system.
Zhang, Kai; Batterman, Stuart A
2009-10-15
Traffic congestion increases air pollutant exposures of commuters and urban populations due to the increased time spent in traffic and the increased vehicular emissions that occur in congestion, especially "stop-and-go" traffic. Increased time in traffic also decreases time in other microenvironments, a trade-off that has not been considered in previous time activity pattern (TAP) analyses conducted for exposure assessment purposes. This research investigates changes in time allocations and exposures that result from traffic congestion. Time shifts were derived using data from the National Human Activity Pattern Survey (NHAPS), which was aggregated to nine microenvironments (six indoor locations, two outdoor locations and one transport location). After imputing missing values, handling outliers, and conducting other quality checks, these data were stratified by respondent age, employment status and period (weekday/weekend). Trade-offs or time-shift coefficients between time spent in vehicles and the eight other microenvironments were then estimated using robust regression. For children and retirees, congestion primarily reduced the time spent at home; for older children and working adults, congestion shifted the time spent at home as well as time in schools, public buildings, and other indoor environments. Changes in benzene and PM2.5 exposure were estimated for the current average travel delay in the U.S. (9 min/day) and other scenarios using the estimated time-shift coefficients, concentrations in key microenvironments derived from the literature, and a probabilistic analysis. Changes in exposures depended on the duration of the congestion and the pollutant. For example, a 30 min/day travel delay was determined to account for 21 ± 12% of current exposure to benzene and 14 ± 8% of PM2.5 exposure. The time allocation shifts and the dynamic approach to TAPs improve estimates of exposure impacts from congestion and other recurring events.
Improved routing strategy based on gravitational field theory
NASA Astrophysics Data System (ADS)
Song, Hai-Quan; Guo, Jin
2015-10-01
Routing and path selection are crucial for many communication and logistic applications. We study the interaction between nodes and packets and establish a simple model for describing the attraction of the node to the packet in transmission process by using the gravitational field theory, considering the real and potential congestion of the nodes. On the basis of this model, we propose a gravitational field routing strategy that considers the attractions of all of the nodes on the travel path to the packet. In order to illustrate the efficiency of proposed routing algorithm, we introduce the order parameter to measure the throughput of the network by the critical value of phase transition from a free flow phase to a congested phase, and study the distribution of betweenness centrality and traffic jam. Simulations show that, compared with the shortest path routing strategy, the gravitational field routing strategy considerably enhances the throughput of the network and balances the traffic load, and nearly all of the nodes are used efficiently. Project supported by the Technology and Development Research Project of China Railway Corporation (Grant No. 2012X007-D) and the Key Program of Technology and Development Research Foundation of China Railway Corporation (Grant No. 2012X003-A).
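A toy version of the attraction idea follows, not the paper's exact field expression: each candidate next hop pulls the packet with a "gravity" that grows as the hop gets closer to the destination and shrinks with its real congestion. The form of the field and the unit mass in this Python sketch are illustrative assumptions.

```python
# A sketch of gravitational-field-style next-hop selection (assumed field form).
def attraction(mass, congestion, distance, eps=1e-9):
    # heavier (more useful) nodes attract; congestion dilutes the mass
    return mass / (1.0 + congestion) / (distance ** 2 + eps)

def next_hop(neighbors, hops_to_dest, queue):
    """Pick the neighbor exerting the strongest pull toward the destination."""
    def pull(v):
        return attraction(mass=1.0, congestion=queue[v],
                          distance=hops_to_dest[v] + 1)
    return max(neighbors, key=pull)

hops_to_dest = {"A": 1, "B": 2}     # A is closer to the destination...
queue        = {"A": 8, "B": 0}     # ...but badly congested
print(next_hop(["A", "B"], hops_to_dest, queue))   # chooses B
```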
Evaluation of the public health impacts of traffic congestion: a health risk assessment.
Levy, Jonathan I; Buonocore, Jonathan J; von Stackelberg, Katherine
2010-10-27
Traffic congestion is a significant issue in urban areas in the United States and around the world. Previous analyses have estimated the economic costs of congestion, related to fuel and time wasted, but few have quantified the public health impacts or determined how these impacts compare in magnitude to the economic costs. Moreover, the relative magnitudes of economic and public health impacts of congestion would be expected to vary significantly across urban areas, as a function of road infrastructure, population density, and atmospheric conditions influencing pollutant formation, but this variability has not been explored. In this study, we evaluate the public health impacts of ambient exposures to fine particulate matter (PM2.5) concentrations associated with a business-as-usual scenario of predicted traffic congestion. We evaluate 83 individual urban areas using traffic demand models to estimate the degree of congestion in each area from 2000 to 2030. We link traffic volume and speed data with the MOBILE6 model to characterize emissions of PM2.5 and particle precursors attributable to congestion, and we use a source-receptor matrix to evaluate the impact of these emissions on ambient PM2.5 concentrations. Marginal concentration changes are related to a concentration-response function for mortality, with a value of statistical life approach used to monetize the impacts. We estimate that the monetized value of PM2.5-related mortality attributable to congestion in these 83 cities in 2000 was approximately $31 billion (2007 dollars), as compared with a value of time and fuel wasted of $60 billion. In future years, the economic impacts grow (to over $100 billion in 2030) while the public health impacts decrease to $13 billion in 2020 before increasing to $17 billion in 2030, given increasing population and congestion but lower emissions per vehicle. Across cities and years, the public health impacts range from more than an order of magnitude less to in excess of the economic impacts. Our analyses indicate that the public health impacts of congestion may be significant enough in magnitude, at least in some urban areas, to be considered in future evaluations of the benefits of policies to mitigate congestion.
LADS: Optimizing Data Transfers using Layout-Aware Data Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R
While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink. Data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this paper, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework called LADS for terabit networks. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to use zero-copy, OS-bypass hardware when available. It can further improve data transfer performance under congestion on the end systems using buffering at the source via flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving I/O bandwidth and data transfer rates across high speed networks.
Reliable multicast protocol specifications flow control and NACK policy
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses due to low buffer space or congestion. Reliable multicast protocols also suffer from the fact that throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
Distributed multiple path routing in complex networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Wang, San-Xiu; Wu, Ling-Wei; Mei, Pan; Yang, Xu-Hua; Wen, Guang-Hui
2016-12-01
Routing in complex transmission networks is an important problem that has garnered extensive research interest in recent years. In this paper, we propose a novel routing strategy called the distributed multiple path (DMP) routing strategy. For each of the O-D node pairs in a given network, the DMP routing strategy computes and stores, in advance, multiple short paths that overlap little with each other. During the transmission stage, it rapidly selects from the pre-computed paths an actual routing path with low transmission cost for each transmission task, according to real-time network transmission status information. Computer simulation results obtained for lattice, ER random, and scale-free networks indicate that the strategy can significantly improve the anti-congestion ability of transmission networks, as well as provide favorable routing robustness against partial network failures.
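The two DMP stages described above can be sketched as (1) precomputing several short paths per O-D pair that share few links, and (2) selecting, at transmission time, the stored path that is cheapest under current load. The overlap rule in this Python sketch (greedily inflating the weight of already-used links between Dijkstra runs) is an assumed simplification of the paper's method.

```python
# A sketch of the two DMP stages: offline path set construction, online selection.
import heapq

def k_low_overlap_paths(adj, src, dst, k=2, penalty=10.0):
    """adj: {u: [(v, w), ...]}. Re-run Dijkstra, inflating already-used links."""
    weights = {(u, v): w for u in adj for v, w in adj[u]}
    paths = []
    for _ in range(k):
        dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, _ in adj[u]:
                nd = d + weights[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        path.reverse()
        paths.append(path)
        for u, v in zip(path, path[1:]):        # discourage reuse of these links
            weights[(u, v)] += penalty
    return paths

def pick_path(paths, load):
    """Online stage: choose the stored path with the least current load."""
    return min(paths, key=lambda p: sum(load[n] for n in p))

adj = {"S": [("A", 1), ("B", 1)], "A": [("T", 1)], "B": [("T", 1.5)], "T": []}
stored = k_low_overlap_paths(adj, "S", "T")
print(stored)                                    # two mostly disjoint paths
print(pick_path(stored, load={"S": 0, "A": 5, "B": 0, "T": 0}))
```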
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to different numbers of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines accurate measurement with rough estimation. A feedback suppression scheme based on a random timer mechanism is given to avoid feedback implosion in the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness and intra-protocol fairness, and provides high link utilization.
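A receiver-side sketch of the mechanism described above: a GAIMD-style window yields an expected rate of roughly W/RTT, and the receiver subscribes to layers cumulatively until their fixed rates cover that expectation. The GAIMD parameters, segment size, and layer rates below are illustrative values.

```python
# A sketch of receiver-side FLMCC-style logic (assumed parameter values).
ALPHA, BETA = 1.0, 0.875            # GAIMD: additive increase, gentle decrease
LAYER_RATES = [64, 128, 256, 512]   # kbps, fixed rate per layer (assumed)

def update_window(cwnd, loss):
    return cwnd * BETA if loss else cwnd + ALPHA

def layers_to_join(expected_kbps):
    total, joined = 0.0, 0
    for rate in LAYER_RATES:        # cumulative subscription, lowest layer first
        if total >= expected_kbps:
            break
        total += rate
        joined += 1
    return joined

cwnd, rtt = 20.0, 0.5               # segments, seconds
for loss in [False, False, True, False]:
    cwnd = update_window(cwnd, loss)
expected = cwnd * 1.2 * 8 / rtt     # 1.2 kB segments -> expected rate in kbps
print(f"expected rate {expected:.0f} kbps, join {layers_to_join(expected)} layers")
```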
Three essays on access pricing
NASA Astrophysics Data System (ADS)
Sydee, Ahmed Nasim
In the first essay, a theoretical model is developed to determine the time path of optimal access price in the telecommunications industry. Determining the optimal access price is an important issue in the economics of telecommunications. Setting a high access price discourages potential entrants; a low access price, on the other hand, amounts to confiscation of private property because the infrastructure already built by the incumbent is sunk. Furthermore, a low access price does not give the incumbent incentives to maintain the current network and to invest in new infrastructures. Much of the existing literature on access pricing suffers either from the limitations of a static framework or from the assumption that all costs are avoidable. The telecommunications industry is subject to high stranded costs and, therefore, to address this issue a dynamic model is imperative. This essay presents a dynamic model of one-way access pricing in which the compensation involved in deregulatory taking is formalized and then analyzed. The short run adjustment after deregulatory taking has occurred is carried out and discussed. The long run equilibrium is also analyzed. A time path for the Ramsey price is shown as the correct dynamic price of access. In the second essay, a theoretical model is developed to determine the time path of optimal access price for an infrastructure that is characterized by congestion and lumpy investment. Much of the theoretical literature on access pricing of infrastructure prescribes that the access price be set at the marginal cost of the infrastructure. In proposing this rule of access pricing, the conventional analysis assumes that infrastructure investments are infinitely divisible so that it makes sense to talk about the marginal cost of investment. Often it is the case that investments in infrastructure are lumpy and can only be made in large chunks, and this renders the marginal cost concept meaningless. In this essay, we formalize a model of access pricing with congestion and in which investments in infrastructure are lumpy. To fix ideas, the model is formulated in the context of airport infrastructure investments, which captures both the element of congestion and the lumpiness involved in infrastructure investments. The optimal investment program suggests how many units of capacity should be installed and at which times. Because time is continuous in the model, the discounted cost -- despite the lumpiness of capacity additions -- can be made to vary continuously by varying the time a capacity addition is made. The main results that emerge from the analysis can be described as follows: First, the global demand for air travel rises with time and experiences an upward jump whenever a capacity addition is made. Second, the access price is constant and stays at the basic level when the system is not congested. When the system is congested, a congestion surcharge is imposed on top of the basic level, and the congestion surcharge rises with the level of congestion until the next capacity addition is made at which time the access price takes a downward jump. Third, the individual demand for air travel is constant before congestion sets in and after the last capacity addition takes place. During a time interval in which congestion rises, the individual demand for travel is below the level that prevails when there is no congestion and declines as congestion worsens. 
The third essay contains a model of access pricing for natural gas transmission pipelines, both when pipeline operators are regulated and when they behave strategically. The high sunk costs involved in building a pipeline network constitute a serious barrier to entry, and competitive behaviour in the transmission pipeline sector cannot be expected. Most economic analyses of access pricing for natural gas transmission pipelines are carried out from the regulatory perspective, and the access prices paid by shippers are cost-based. The model formalized here is intended to capture some essential characteristics of networks in which components interact with one another when combined into an integrated system. The model shows how the topology of the network determines the access prices in different components of the network. The general results that emerge from the analysis can be summarized as follows. First, the monopoly power of a pipeline operator is reduced by the entry of a new pipeline supply connected in parallel to the same demand node. When the pipelines are connected in series, the one upstream enjoys a first-mover advantage over the one downstream, and the toll set by the upstream pipeline operator after entry by the downstream pipeline operator will rise above the original monopoly level. The equilibrium prices of natural gas at the various nodes of the network are also discussed. (Abstract shortened by UMI.)
Gorcsan, J; Snow, F R; Paulsen, W; Nixon, J V
1991-03-01
A completely noninvasive method for estimating left atrial pressure in patients with congestive heart failure and mitral regurgitation has been devised with the use of continuous-wave Doppler echocardiography and brachial sphygmomanometry. Of 46 patients studied with mitral regurgitation, 35 (76%) had jets with distinct Doppler spectral envelopes recorded. The peak ventriculoatrial gradient was obtained by measuring peak mitral regurgitant velocity in systole and using the modified Bernoulli equation. This gradient was then subtracted from peak brachial systolic blood pressure, an estimate of left ventricular systolic pressure, to yield left atrial pressure (left atrial pressure = systolic blood pressure - mitral regurgitant pressure gradient). Noninvasive estimates of left atrial pressure from 35 patients were plotted against simultaneous recordings of mean pulmonary capillary wedge pressure, resulting in the correlation y = 0.88x + 3.3, r = 0.88, standard error of estimate = ±4 mm Hg (p < 0.001). Therefore, continuous-wave Doppler echocardiography and sphygmomanometry may be used in selected patients with congestive heart failure and mitral regurgitation for noninvasive estimation of left atrial pressure.
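The estimate can be worked through numerically: the modified Bernoulli equation gives the pressure gradient as 4v², so the abstract's formula reduces to LAP = SBP − 4v². The function and example values below are illustrative, not data from the study.

```python
# Worked example of the abstract's formula: left atrial pressure (LAP) equals
# systolic blood pressure minus the peak mitral regurgitant gradient, with the
# gradient from the modified Bernoulli equation (delta-P = 4 * v^2, in mm Hg).

def estimate_lap(systolic_bp_mmhg, peak_mr_velocity_m_s):
    gradient = 4.0 * peak_mr_velocity_m_s ** 2  # modified Bernoulli equation
    return systolic_bp_mmhg - gradient

# e.g. a peak MR jet of 4.5 m/s with a cuff systolic pressure of 110 mm Hg:
print(estimate_lap(110, 4.5))  # 110 - 81 = 29 mm Hg (illustrative numbers)
```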
DOT National Transportation Integrated Search
2001-01-01
Internet-based Advanced Traveler Information Services (ATIS) provide the urban traveler with estimated travel times based on current roadway congestion. Survey research indicates that the vast majority of current ATIS users are satisfied consumers wh...
Quantifying incident-induced travel delays on freeways using traffic sensor data
DOT National Transportation Integrated Search
2008-02-01
Traffic congestion is a major operational problem for freeways in Washington State. Recent studies have estimated that more than 50% of freeway congestion is caused by traffic incidents. To help the Washington State Department of Transportation (WSDO...
Quantifying incident-induced travel delays on freeways using traffic sensor data
DOT National Transportation Integrated Search
2008-05-01
Traffic congestion is a major operational problem for freeways in Washington State. Recent studies have estimated that more than 50 percent of freeway congestion is caused by traffic incidents. To help the Washington State Department of Transportatio...
DOT National Transportation Integrated Search
2017-11-01
Transportation agencies face continual pressure to ensure the proper allocation of transportation investments and financial resources. Choosing the right set of congestion mitigation and mobility strategies is critical to ensuring the wise applicatio...
Essays in financial transmission rights pricing
NASA Astrophysics Data System (ADS)
Posner, Barry
This work examines issues in the pricing of financial transmission rights in the PJM market region. The US federal government is advocating the creation of large-scale, not-for-profit regional transmission organizations to increase the efficiency of the transmission of electricity. As a non-profit entity, PJM needs to allocate excess revenues collected as congestion rents, and the participants in the transmission markets need to be able to hedge their exposure to congestion rents. For these purposes, PJM has developed an instrument known as the financial transmission right (FTR). This research, utilizing a new data set assembled by the author, looks at two aspects of the FTR market. The first chapter examines the problem of forecasting congestion in a transmission grid. In the PJM FTR system, firms bid in a competitive auction for FTRs that cover a period of one month. The auctions take place in the middle of the previous month; therefore firms have to forecast congestion rents for the period two to six weeks after the auction. The common methods of forecasting congestion are either time-series models or full-information engineering studies. In this research, the author develops a forecasting system that is more economically grounded than a simple time-series model, but requires less information than an engineering model. This method is based upon the arbitrage-cost methodology, whereby congestion is calculated as the difference of two non-observable variables: the transmission price difference that would exist in the total absence of transmission capacity between two nodes, and the ability of the existing transmission to reduce that price difference. If the ability to reduce the price difference is greater than the price difference, then the cost of electricity at each node will be the same, and congestion rent will be zero. If transmission capacity limits are binding on the flow of power, then a price difference persists and congestion rents exist. Three transmission paths in the Delmarva Peninsula were examined. The maximum-likelihood two-way Tobit model developed in Chapter One consistently predicts the expected responses to the independent variables that were employed, but the model as defined here does a poor job of predicting prices. This is likely due to the inability to include system outages (i.e., short-term changes in the structure of the transmission grid) as variables in the estimation model. The second chapter addresses the behavior of firms in the monthly auctions for FTRs. FTRs are a claim to congestion rent revenues along a certain path within the PJM grid, and are awarded in a uniform-price divisible-goods auction. Firms typically submit a schedule of bids for different amounts of FTR at different prices, akin to a demand curve. A firm bidding too high a price may cause the clearing price of the FTR to be higher than the realized value of the FTR, creating a loss from ownership of the FTR. A firm bidding too low wins no FTRs, depriving itself of the ability to profit from ownership or to hedge against congestion. Several questions concerning firm behavior are addressed in this study. It is found that firms adjust their bids in response to new information obtained from past auctions: they raise or lower bids in accordance with changes in recent FTR prices and payoffs. Firms consistently bid below the value of the FTR (i.e., shade their bids).
This adds empirical evidence to the theoretically posited notion that uniform-price auctions are not truth-telling, unlike the second-price auction for a non-divisible good. Firms employ greater bid-shading in response to increases in the volatility of both FTR clearing prices and realized FTR values. This validates the notion that firms are risk-averse. It is discovered that better-informed "insider" firms employ structurally different bidding strategies, but these differences do not lead to greater profits. However, profits do increase as firms gain more experience in these markets, lending credence to the notion that firms learn over time and that markets discipline poorly performing firms by either educating them or driving them out of the market. It is also found that firms that employ complicated bidding strategies enjoy greater profitability than firms that employ simple bidding strategies. A surprising corollary finding is that firm strategies do not converge to a common form: different firms continue to employ different strategies, and often move away from the seemingly dominant strategy. Firms can enter this market as either long-buyers or short-sellers, and it is discovered that long and short players display structurally divergent bidding strategies. This is perhaps unsurprising, given that long players can be either hedgers or speculators, but short players are overwhelmingly speculators.
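The first chapter's arbitrage-cost logic can be stated compactly: congestion rent is the positive part of the unconstrained nodal price difference net of the price relief that transmission provides. The sketch below is our paraphrase of that rule with made-up numbers, not the dissertation's estimator.

```python
# Sketch of the arbitrage-cost idea described above (variable names are ours,
# not the dissertation's): congestion rent is zero when transmission can fully
# arbitrage the nodal price difference, and equals the residual difference
# when capacity is binding.

def congestion_rent(unconstrained_price_diff, price_relief_from_transmission):
    return max(0.0, unconstrained_price_diff - price_relief_from_transmission)

print(congestion_rent(12.0, 15.0))  # 0.0 -> uncongested, equal nodal prices
print(congestion_rent(12.0, 7.0))   # 5.0 -> binding capacity, rent persists
```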
Congestion control and routing over satellite networks
NASA Astrophysics Data System (ADS)
Cao, Jinhua
Satellite networks and transmissions find application in computer communications, telephone communications, television broadcasting, transportation, space situational awareness systems, and so on. This thesis focuses on two issues affecting satellite networking: network congestion control and network routing optimization. Congestion, which leads to long queueing delays, packet losses, or both, is a networking problem that has drawn the attention of many researchers. The goal of congestion control mechanisms is to ensure high bandwidth utilization while avoiding network congestion by regulating the rate at which traffic sources inject packets into a network. In this thesis, we propose a stable congestion controller using data-driven, safe switching control theory to improve the dynamic performance of satellite Transmission Control Protocol/Active Queue Management (TCP/AQM) networks. First, the stable region of the Proportional-Integral (PI) parameters for a nominal model is explored. Then, a PI controller, whose parameters are adaptively tuned by switching among members of a given candidate set using observed plant data, is presented and compared with some classical AQM policy examples, such as Random Early Detection (RED) and fixed PI control. A new cost-detectable switching law with an interval cost function switching algorithm, which improves performance and also saves computational cost, is developed and compared with a law commonly used in the switching control literature. Finite-gain stability of the system is proved. A fuzzy logic PI controller is incorporated as a special candidate to achieve good performance at all nominal points with the available set of candidate controllers. Simulations are presented to validate the theory. An efficient routing algorithm plays a key role in optimizing network resources. In this thesis, we briefly analyze Low Earth Orbit (LEO) satellite networks, review the Cross Entropy (CE) method, and then develop a novel on-demand routing system named Cross Entropy Accelerated Ant Routing System (CEAARS) for regular-constellation LEO satellite networks. By implementing simulations on an Iridium-like satellite network, we compare the proposed CEAARS algorithm with the two approaches to adaptive routing protocols on the Internet, distance-vector (DV) and link-state (LS), as well as with the original Cross Entropy Ant Routing System (CEARS). DV algorithms are based on the distributed Bellman-Ford algorithm, and LS algorithms are implementations of Dijkstra's single-source shortest path algorithm. The results show that CEAARS not only remarkably improves the convergence speed of achieving optimal or suboptimal paths, but also reduces the number of overhead ants (management packets).
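For readers unfamiliar with PI-based AQM, the fixed-PI baseline the thesis compares against updates a mark/drop probability from the queue-length error. A minimal sketch follows; the coefficient magnitudes and reference queue length are illustrative, not the thesis's tuned values.

```python
# Minimal sketch of a PI AQM marking-probability update of the kind used as a
# baseline above. Coefficients a, b and q_ref are illustrative assumptions.

class PIAqm:
    def __init__(self, a=1.8e-5, b=1.7e-5, q_ref=200.0):
        self.a, self.b, self.q_ref = a, b, q_ref
        self.p = 0.0       # packet mark/drop probability
        self.q_prev = 0.0  # previous sampled queue length (packets)

    def update(self, q):
        """PI update: p_k = p_{k-1} + a*(q_k - q_ref) - b*(q_{k-1} - q_ref)."""
        self.p += self.a * (q - self.q_ref) - self.b * (self.q_prev - self.q_ref)
        self.p = min(1.0, max(0.0, self.p))  # clamp to a valid probability
        self.q_prev = q
        return self.p
```

The adaptive scheme described in the abstract would switch among several such candidate (a, b) pairs based on observed plant data rather than keeping one fixed pair.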
Penry, J F; Upton, J; Mein, G A; Rasmussen, M D; Ohnstad, I; Thompson, P D; Reinemann, D J
2017-01-01
The primary objective of this experiment was to assess the effect of mouthpiece chamber vacuum on teat-end congestion. The secondary objective was to assess the interactive effects of mouthpiece chamber vacuum with teat-end vacuum and pulsation setting on teat-end congestion. The influence of system vacuum, pulsation settings, mouthpiece chamber vacuum, and teat-end vacuum on teat-end congestion was tested in a 2×2 factorial design. The low-risk conditions for teat-end congestion (TEL) were 40 kPa system vacuum (Vs) and 400-ms pulsation b-phase. The high-risk conditions for teat-end congestion (TEH) were 49 kPa Vs and 700-ms b-phase. The low-risk condition for teat-barrel congestion (TBL) was created by venting the liner mouthpiece chamber to atmosphere. In the high-risk condition for teat-barrel congestion (TBH), the mouthpiece chamber was connected to short milk tube vacuum. Eight cows (32 quarters) were used in the experiment conducted during 0400 h milkings. All cows received all treatments over the entire experimental period. Teatcups were removed after 150 s for all treatments to standardize the exposure period. Calculated teat canal cross-sectional area (CA) was used to assess congestion of teat tissue. The main effect of the teat-end treatment was a reduction in CA of 9.9% between TEL and TEH conditions, for both levels of teat-barrel congestion risk. The main effect of the teat-barrel treatment was remarkably similar, with a decrease of 9.7% in CA between TBL and TBH conditions for both levels of teat-end congestion risk. No interaction between treatments was detected, hence the main effects are additive. The most aggressive of the 4 treatment combinations (TEH plus TBH) had a CA estimate 20% smaller than for the most gentle treatment combination (TEL plus TBL). The conditions designed to impair circulation in the teat barrel also had a deleterious effect on circulation at the teat end. This experiment highlights the importance of elevated mouthpiece chamber vacuum on teat-end congestion and resultant decreases in CA. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
DOT National Transportation Integrated Search
2003-09-01
The Urban Mobility Study Report procedures provide estimates of mobility at the areawide level. The approach that is used describes congestion in consistent ways using generally available data allowing for comparisons across urban areas or groups of ...
The impact of truck repositioning on congestion and pollution in the LA basin.
DOT National Transportation Integrated Search
2011-03-01
Pollution and congestion caused by port related truck traffic is usually estimated based on careful transportation modeling and simulation. In these efforts, however, attention is normally focused on trucks on their way from a terminal at the Los Ang...
Development of an areawide estimate of truck freight value in the urban mobility report.
DOT National Transportation Integrated Search
2012-08-01
Significant efforts have resulted in improved knowledge about the effects of congestion on the motoring public. The Urban Mobility Report (UMR) has been produced for over 20 years detailing the effects of congestion in the United States (1). Despit...
DOT National Transportation Integrated Search
2015-05-01
This document presents summary and detailed findings from a research effort to develop estimates of the cost-effectiveness of a range of project types funded under the Congestion Mitigation and Air Quality (CMAQ) Improvement Program. In this study, c...
DOT National Transportation Integrated Search
2008-08-01
Freeway congestion is a major problem in many urban areas. It has been estimated that freeway incidents (events that impede the flow of traffic: accidents, disabled vehicles, etc.) account for one-half to three-fourths of the total congestion on metr...
Improved Efficient Routing Strategy on Scale-Free Networks
NASA Astrophysics Data System (ADS)
Jiang, Zhong-Yuan; Liang, Man-Gui
Since the betweenness of nodes in complex networks can theoretically represent the traffic load of nodes under the currently used routing strategy, we propose an improved efficient (IE) routing strategy to enhance the network traffic capacity based on betweenness centrality. Any node with the highest betweenness is susceptible to traffic congestion. An efficient way to improve the network traffic capacity is to redistribute the heavy traffic load from these central nodes to non-central nodes. In this paper, we first define a path cost function as the sum of node betweenness, with a tunable parameter β, along the actual path. Then, by minimizing the path cost, our IE routing strategy achieves a clear improvement in network transport efficiency. Simulations on scale-free Barabási-Albert (BA) networks confirmed the effectiveness of our strategy when compared with the efficient routing (ER) and shortest path (SP) routing strategies.
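A minimal realization of this idea is sketched below with networkx, under the assumption (from our reading of the abstract) that the path cost is the sum of node betweenness raised to β; charging the cost of each entered node lets an ordinary Dijkstra search minimize it.

```python
# Sketch of the IE idea with networkx. The cost form (sum of betweenness**beta
# over path nodes) follows the abstract; the realization below is ours.
import networkx as nx

def ie_path(G, source, target, beta=1.0):
    bc = nx.betweenness_centrality(G)
    # Charge the cost of entering each node, so the total path cost equals
    # (up to the constant bc[source]**beta) the sum of bc[i]**beta on the path.
    def weight(u, v, edge_attrs):
        return bc[v] ** beta
    return nx.dijkstra_path(G, source, target, weight=weight)

G = nx.barabasi_albert_graph(200, 3, seed=1)
print(ie_path(G, 0, 150, beta=1.0))  # detours around high-betweenness hubs
```

Larger β pushes routes further away from hubs, trading path length for congestion relief.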
A link-adding strategy for transport efficiency of complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Han, Weizhan; Guo, Qing; Wang, Zhenyong; Zhang, Shuai
2016-12-01
The transport efficiency is one of the critical parameters for evaluating the performance of a network. In this paper, we propose an improved efficient (IE) strategy to enhance the network transport efficiency of complex networks by adding a fraction of links to an existing network based on the nodes' local degree centrality and the shortest path length. Simulation results show that the proposed strategy yields better traffic capacity and a shorter average shortest path length than the low-degree-first (LDF) strategy under the shortest path routing protocol. The proposed strategy improves the overall traffic handling and delivery ability of the network. This study can help alleviate congestion in networks and is useful for designing and optimizing realistic networks.
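A toy version of such a link-adding rule is sketched below; the scoring function combining long shortest-path separation with low degrees is our assumption in the spirit of the abstract, not the paper's exact criterion.

```python
# Illustrative link-adding rule: connect currently unlinked node pairs that
# combine low degree with long shortest-path separation. The score function
# is an assumption, not the paper's actual criterion.
import itertools
import networkx as nx

def add_links(G, n_new):
    candidates = []
    for u, v in itertools.combinations(G.nodes, 2):
        if not G.has_edge(u, v):
            d = nx.shortest_path_length(G, u, v)
            score = d / (G.degree[u] * G.degree[v])  # long paths, low degrees first
            candidates.append((score, u, v))
    for _, u, v in sorted(candidates, reverse=True)[:n_new]:
        G.add_edge(u, v)
    return G

G = add_links(nx.barabasi_albert_graph(100, 2, seed=1), n_new=10)
print(nx.average_shortest_path_length(G))  # shortened by the added links
```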
Congestion Prediction Modeling for Quality of Service Improvement in Wireless Sensor Networks
Lee, Ga-Won; Lee, Sung-Young; Huh, Eui-Nam
2014-01-01
Information technology (IT) is pushing ahead with drastic reforms of modern life to improve human welfare. In Wireless Sensor Networks (WSNs), objects constitute "information networks" through smart, self-regulated information gathering that also recognizes and controls current information states. Information observed from sensor networks in real time is used to increase quality of life (QoL) in various industries and in daily life. One of the key challenges of WSNs is how to achieve lossless data transmission. Although today's sensor nodes have enhanced capacities, it is hard to ensure lossless, reliable end-to-end data transmission in WSNs due to unstable wireless links and limited hardware resources, which makes high quality of service (QoS) requirements difficult to satisfy. We propose a node and path traffic prediction model to predict and minimize congestion. This solution includes prediction of packet generation under network congestion from both periodic and event-driven data generation. Simulation using NS-2 and Matlab is used to demonstrate the effectiveness of the proposed solution. PMID:24784035
NASA Astrophysics Data System (ADS)
Hitaj, Claudia
In this dissertation, I analyze the drivers of wind power development in the United States as well as the relationship between renewable power plant location and transmission congestion and emissions levels. I first examine the role of government renewable energy incentives and of access to the electricity grid in investment in wind power plants across counties from 1998 to 2007. The results indicate that the federal production tax credit, state-level sales tax credit and production incentives play an important role in promoting wind power. In addition, higher wind power penetration levels can be achieved by bringing more parts of the electricity transmission grid under independent system operator regulation. I conclude that state and federal government policies play a significant role in wind power development both by providing financial support and by improving physical and procedural access to the electricity grid. Second, I examine the effect of renewable power plant location on electricity transmission congestion levels and system-wide emissions levels in a theoretical model and a simulation study. A new renewable plant takes the effect of congestion on its own output into account, but ignores the effect of its marginal contribution to congestion on output from existing plants, which results in curtailment of renewable power. Though pricing congestion removes the externality and reduces curtailment, I find that in the absence of a price on emissions, pricing congestion may in some cases actually increase system-wide emissions. The final part of my dissertation deals with an econometric issue that emerged from the empirical analysis of the drivers of wind power. I study the effect of the degree of censoring on random-effects Tobit estimates in finite samples, with a particular focus on severe censoring, in which the percentage of uncensored observations falls to 1 to 5 percent. The results show that the Tobit model performs well even at 5 percent uncensored observations, with the bias in the Tobit estimates remaining at or below 5 percent. Under severe censoring (1 percent uncensored observations), large biases appear in the estimated standard errors and marginal effects. These are generally reduced as the sample size increases in both N and T.
DOT National Transportation Integrated Search
2014-01-01
Rail lines present two major challenges to the roadways they intersect: potential for collisions and increased congestion. In addition, congestion can contribute to collision hazards when drivers are impatient or vehicles are prevented from clea...
How Travel Demand Affects Detection of Non-Recurrent Traffic Congestion on Urban Road Networks
NASA Astrophysics Data System (ADS)
Anbaroglu, B.; Heydecker, B.; Cheng, T.
2016-06-01
Occurrence of non-recurrent traffic congestion hinders the economic activity of a city, as travellers could miss appointments or be late for work or important meetings. Similarly, for shippers, unexpected delays may disrupt just-in-time delivery and manufacturing processes, which could cost them payment. Consequently, research on non-recurrent congestion detection on urban road networks has recently gained attention. By analysing large amounts of traffic data collected on a daily basis, traffic operation centres can improve their methods to detect non-recurrent congestion rapidly and then revise their existing plans to mitigate its effects. Space-time clusters of high link journey time estimates correspond to non-recurrent congestion events. Existing research, however, has not considered the effect of travel demand on the effectiveness of non-recurrent congestion detection methods. Therefore, this paper investigates how travel demand affects the detection of non-recurrent traffic congestion on urban road networks. Travel demand has been classified into three categories: low, normal and high. The experiments are carried out on London's urban road network, and the results demonstrate the necessity of adjusting the relative importance of the component evaluation criteria depending on the travel demand level.
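The space-time clustering step can be illustrated with a toy sketch: flag (link, interval) cells whose journey times exceed a baseline by some factor, then group touching flagged cells into events. The threshold factor and the grid-adjacency rule below are simplifying assumptions, not the paper's method.

```python
# Sketch of space-time flagging of high link journey times. The 1.5x threshold
# and grid adjacency (real work would use road-network topology) are
# illustrative assumptions.
import numpy as np

def flag_nonrecurrent(jt, baseline, factor=1.5):
    """jt, baseline: arrays of shape (n_links, n_intervals) of journey times.
    Returns a boolean mask of cells flagged as unusually slow."""
    return jt > factor * baseline

def grow_events(mask):
    """Group flagged cells adjacent in link index or time index into events."""
    n_links, n_times = mask.shape
    seen, events = set(), []
    for start in zip(*np.nonzero(mask)):
        if start in seen:
            continue
        stack, event = [start], []
        while stack:
            l, t = stack.pop()
            if (l, t) in seen:
                continue
            seen.add((l, t))
            event.append((l, t))
            for a, b in ((l - 1, t), (l + 1, t), (l, t - 1), (l, t + 1)):
                if 0 <= a < n_links and 0 <= b < n_times and mask[a, b]:
                    stack.append((a, b))
        events.append(event)
    return events
```

The paper's demand-level finding would correspond here to making `factor` (and the criteria weighting) depend on whether demand is low, normal or high.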
Air pollution and health risks due to vehicle traffic.
Zhang, Kai; Batterman, Stuart
2013-04-15
Traffic congestion increases vehicle emissions and degrades ambient air quality, and recent studies have shown excess morbidity and mortality for drivers, commuters and individuals living near major roadways. Presently, our understanding of the air pollution impacts from congestion on roads is very limited. This study demonstrates an approach to characterize risks of traffic for on- and near-road populations. Simulation modeling was used to estimate on- and near-road NO2 concentrations and health risks for freeway and arterial scenarios attributable to traffic for different traffic volumes during rush hour periods. The modeling used emission factors from two different models (Comprehensive Modal Emissions Model and Motor Vehicle Emissions Factor Model version 6.2), an empirical traffic speed-volume relationship, the California Line Source Dispersion Model, an empirical NO2-NOx relationship, estimated travel time changes during congestion, and concentration-response relationships from the literature, which give emergency doctor visits, hospital admissions and mortality attributed to NO2 exposure. An incremental analysis, which expresses the change in health risks for small increases in traffic volume, showed non-linear effects. For a freeway, "U" shaped trends of incremental risks were predicted for on-road populations, and incremental risks are flat at low traffic volumes for near-road populations. For an arterial road, incremental risks increased sharply for both on- and near-road populations as traffic increased. These patterns result from changes in emission factors, the NO2-NOx relationship, the travel delay for the on-road population, and the extended duration of rush hour for the near-road population. This study suggests that health risks from congestion are potentially significant, and that additional traffic can significantly increase risks, depending on the type of road and other factors. Further, evaluations of risk associated with congestion must consider travel time, the duration of rush-hour, congestion-specific emission estimates, and uncertainties. Copyright © 2013 Elsevier B.V. All rights reserved.
Air pollution and health risks due to vehicle traffic
Zhang, Kai; Batterman, Stuart
2014-01-01
Traffic congestion increases vehicle emissions and degrades ambient air quality, and recent studies have shown excess morbidity and mortality for drivers, commuters and individuals living near major roadways. Presently, our understanding of the air pollution impacts from congestion on roads is very limited. This study demonstrates an approach to characterize risks of traffic for on- and near-road populations. Simulation modeling was used to estimate on- and near-road NO2 concentrations and health risks for freeway and arterial scenarios attributable to traffic for different traffic volumes during rush hour periods. The modeling used emission factors from two different models (Comprehensive Modal Emissions Model and Motor Vehicle Emissions Factor Model version 6.2), an empirical traffic speed–volume relationship, the California Line Source Dispersion Model, an empirical NO2–NOx relationship, estimated travel time changes during congestion, and concentration–response relationships from the literature, which give emergency doctor visits, hospital admissions and mortality attributed to NO2 exposure. An incremental analysis, which expresses the change in health risks for small increases in traffic volume, showed non-linear effects. For a freeway, “U” shaped trends of incremental risks were predicted for on-road populations, and incremental risks are flat at low traffic volumes for near-road populations. For an arterial road, incremental risks increased sharply for both on- and near-road populations as traffic increased. These patterns result from changes in emission factors, the NO2–NOx relationship, the travel delay for the on-road population, and the extended duration of rush hour for the near-road population. This study suggests that health risks from congestion are potentially significant, and that additional traffic can significantly increase risks, depending on the type of road and other factors. Further, evaluations of risk associated with congestion must consider travel time, the duration of rush-hour, congestion-specific emission estimates, and uncertainties. PMID:23500830
Predictability of Top of Descent Location for Operational Idle-Thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel L.
2010-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its uncertainty models, commercial flights executed idle-thrust descents at a specified descent speed, and the recorded data included the specified descent speed profile, aircraft weight, and the winds entered into the FMS as well as the radar data. The FMS computed the intended descent path assuming idle thrust after top of descent (TOD), and the controllers and pilots then endeavored to allow the FMS to fly the descent to the meter fix with minimal human intervention. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location were extracted from the radar data. Using approximately 70 descents each in Boeing 757 and Airbus 319/320 aircraft, multiple regression estimated TOD location as a linear function of the available predictive factors. The cruise and meter fix altitudes, descent speed, and wind clearly improve goodness of fit. The aircraft weight improves fit for the Airbus descents but not for the B757. Except for a few statistical outliers, the residuals have absolute value less than 5 nmi. Thus, these predictive factors adequately explain the TOD location, which indicates the data do not include excessive noise.
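The regression step is standard multiple linear regression and is easy to sketch; the data below are synthetic placeholders with made-up coefficients (the real study fit roughly 70 radar-derived descents per aircraft type).

```python
# Sketch of the abstract's regression: TOD location as a linear function of
# the predictive factors. All data and coefficients here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 70
X = np.column_stack([
    rng.uniform(30, 39, n),    # cruise altitude (kft)
    rng.uniform(10, 12, n),    # meter fix altitude (kft)
    rng.uniform(250, 290, n),  # descent speed (kt)
    rng.uniform(-40, 40, n),   # along-path wind (kt)
])
y = X @ np.array([2.5, -2.0, 0.15, 0.1]) + rng.normal(0, 2, n)  # fake TODs

A = np.column_stack([np.ones(n), X])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # multiple linear regression
resid = y - A @ coef
print(coef, np.abs(resid).max())              # residuals of a few nmi
```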
A Bayesian ridge regression analysis of congestion's impact on urban expressway safety.
Shi, Qi; Abdel-Aty, Mohamed; Lee, Jaeyoung
2016-03-01
With the rapid growth of traffic in urban areas, concerns about congestion and traffic safety have been heightened. This study leveraged both Automatic Vehicle Identification (AVI) system and Microwave Vehicle Detection System (MVDS) installed on an expressway in Central Florida to explore how congestion impacts the crash occurrence in urban areas. Multiple congestion measures from the two systems were developed. To ensure more precise estimates of the congestion's effects, the traffic data were aggregated into peak and non-peak hours. Multicollinearity among traffic parameters was examined. The results showed the presence of multicollinearity especially during peak hours. As a response, ridge regression was introduced to cope with this issue. Poisson models with uncorrelated random effects, correlated random effects, and both correlated random effects and random parameters were constructed within the Bayesian framework. It was proven that correlated random effects could significantly enhance model performance. The random parameters model has similar goodness-of-fit compared with the model with only correlated random effects. However, by accounting for the unobserved heterogeneity, more variables were found to be significantly related to crash frequency. The models indicated that congestion increased crash frequency during peak hours while during non-peak hours it was not a major crash contributing factor. Using the random parameter model, the three congestion measures were compared. It was found that all congestion indicators had similar effects while Congestion Index (CI) derived from MVDS data was a better congestion indicator for safety analysis. Also, analyses showed that the segments with higher congestion intensity could not only increase property damage only (PDO) crashes, but also more severe crashes. In addition, the issues regarding the necessity to incorporate a specific congestion indicator for congestion's effects on safety and to take care of the multicollinearity between explanatory variables were also discussed. By including a specific congestion indicator, the model performance significantly improved. When comparing models with and without ridge regression, the magnitude of the coefficients was altered in the existence of multicollinearity. These conclusions suggest that the use of an appropriate congestion measure and consideration of multicollinearity among the variables would improve the models and our understanding about the effects of congestion on traffic safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
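The core motivation for the ridge step, coefficient instability under multicollinearity, can be shown in a few lines. This is a toy stand-in with synthetic data, not the paper's Bayesian Poisson random-effects models.

```python
# Illustration of how ridge shrinkage stabilizes coefficients when congestion
# measures are collinear. Synthetic data; a toy stand-in for the paper's
# Bayesian Poisson ridge models.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n = 500
base = rng.normal(size=n)
# Three near-duplicate "congestion measures", as when AVI and MVDS overlap:
X = np.column_stack([base + 0.05 * rng.normal(size=n) for _ in range(3)])
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

print(LinearRegression().fit(X, y).coef_)  # typically unstable under collinearity
print(Ridge(alpha=10.0).fit(X, y).coef_)   # shrunk toward stable magnitudes
```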
[Clinical examination and the Valsalva maneuver in heart failure].
Liniado, Guillermo E; Beck, Martín A; Gimeno, Graciela M; González, Ana L; Cianciulli, Tomás F; Castiello, Gustavo G; Gagliardi, Juan A
2018-01-01
Congestion in heart failure patients with reduced ejection fraction (HFrEF) is relevant and closely linked to the clinical course. Bedside blood pressure measurement during the Valsalva maneuver (Val), added to clinical examination, may improve the assessment of congestion when compared to NT-proBNP levels and left atrial pressure (LAP) estimation by Doppler echocardiography, as surrogate markers of congestion in HFrEF. Clinical examination, LAP estimation and blood tests were performed in 69 ambulatory HFrEF patients with left ventricular ejection fraction ≤ 40% and sinus rhythm. The Framingham Heart Failure Score (HFS) was used to evaluate clinical congestion; Val was classified as normal or abnormal, NT-proBNP was classified as low (< 1000 pg/ml) or high (≥ 1000 pg/ml), and the ratio between Doppler early mitral inflow and tissue diastolic velocity was used to estimate LAP and was classified as low (E/e' < 15) or high (E/e' ≥ 15). A total of 69 patients with HFrEF were included; 27 had an HFS ≥ 2 and 13 of them had high NT-proBNP. HFS ≥ 2 had a 62% sensitivity, 70% specificity and a positive likelihood ratio of 2.08 (p = 0.01) to detect congestion. When Val was added to the clinical examination, the presence of an HFS ≥ 2 and abnormal Val showed a 100% sensitivity, 64% specificity and a positive likelihood ratio of 2.8 (p = 0.0004). Compared with LAP, the presence of an HFS ≥ 2 and abnormal Val had 86% sensitivity, 54% specificity and a positive likelihood ratio of 1.86 (p = 0.03). In conclusion, an integrated clinical examination with the addition of the Valsalva maneuver may improve the assessment of congestion in patients with HFrEF.
Routing optimization in networks based on traffic gravitational field model
NASA Astrophysics Data System (ADS)
Liu, Longgeng; Luo, Guangchun
2017-04-01
For research on the gravitational field routing mechanism on complex networks, we further analyze the gravitational effect of paths. In this study, we introduce the concept of path confidence degree to evaluate the unblocked reliability of paths; it takes the traffic state of all nodes on the path into account as a whole. On this basis, we propose an improved gravitational field routing protocol that considers both the gravities of all nodes on the path and the path confidence degree. In order to evaluate the transmission performance of the routing strategy, an order parameter is introduced to measure the network throughput by the critical value of the phase transition from a free-flow phase to a jammed phase, and the betweenness centrality is used to evaluate the transmission performance and traffic congestion of the network. Simulation results show that, compared with the shortest-path routing strategy and the previous gravitational field routing strategy, the proposed algorithm improves the network throughput considerably and effectively balances the traffic load within the network, with all nodes in the network utilized with high efficiency. As long as γ ≥ α, the transmission performance reaches its maximum and remains unchanged for different α and γ, which ensures that the proposed routing protocol is highly efficient and stable.
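For reference, a common form of this order parameter in the traffic-dynamics literature (our notation; the paper's exact definition may differ) is:

```latex
% W(t): number of packets in the network at time t; R: packet generation rate.
\eta(R) = \lim_{t\to\infty} \frac{1}{R}\,
          \frac{\langle W(t+\Delta t) - W(t)\rangle}{\Delta t}
% eta = 0 in the free-flow phase (R < R_c); eta > 0 once the network jams,
% so the critical rate R_c measures the network's throughput capacity.
```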
Chapman, Ralph; Keall, Michael; Howden-Chapman, Philippa; Grams, Mark; Witten, Karen; Randal, Edward; Woodward, Alistair
2018-05-11
Active travel (walking and cycling) is beneficial for people’s health and has many co-benefits, such as reducing motor vehicle congestion and pollution in urban areas. There have been few robust evaluations of active travel, and very few studies have valued health and emissions outcomes. The ACTIVE before-and-after quasi-experimental study estimated the net benefits of health and other outcomes from New Zealand’s Model Communities Programme using an empirical analysis comparing two intervention cities with two control cities. The Programme funded investment in cycle paths, other walking and cycling facilities, cycle parking, ‘shared spaces’, media campaigns and events, such as ‘Share the Road’, and cycle-skills training. Using the modified Integrated Transport and Health Impacts Model, the Programme’s net economic benefits were estimated from the changes in use of active travel modes. Annual benefits for health in the intervention cities were estimated at 34.4 disability-adjusted life years (DALYs) and two lives saved due to reductions in cardiac disease, diabetes, cancer, and respiratory disease. Reductions in transport-related carbon emissions were also estimated and valued. Using a discount rate of 3.5%, the estimated benefit/cost ratio was 11:1 and was robust to sensitivity testing. It is concluded that when concerted investment is made in active travel in a city, there is likely to be a measurable, positive return on investment.
Grams, Mark; Witten, Karen; Woodward, Alistair
2018-01-01
Active travel (walking and cycling) is beneficial for people’s health and has many co-benefits, such as reducing motor vehicle congestion and pollution in urban areas. There have been few robust evaluations of active travel, and very few studies have valued health and emissions outcomes. The ACTIVE before-and-after quasi-experimental study estimated the net benefits of health and other outcomes from New Zealand’s Model Communities Programme using an empirical analysis comparing two intervention cities with two control cities. The Programme funded investment in cycle paths, other walking and cycling facilities, cycle parking, ‘shared spaces’, media campaigns and events, such as ‘Share the Road’, and cycle-skills training. Using the modified Integrated Transport and Health Impacts Model, the Programme’s net economic benefits were estimated from the changes in use of active travel modes. Annual benefits for health in the intervention cities were estimated at 34.4 disability-adjusted life years (DALYs) and two lives saved due to reductions in cardiac disease, diabetes, cancer, and respiratory disease. Reductions in transport-related carbon emissions were also estimated and valued. Using a discount rate of 3.5%, the estimated benefit/cost ratio was 11:1 and was robust to sensitivity testing. It is concluded that when concerted investment is made in active travel in a city, there is likely to be a measurable, positive return on investment. PMID:29751618
UAS Conflict-Avoidance Using Multiagent RL with Abstract Strategy Type Communication
NASA Technical Reports Server (NTRS)
Rebhuhn, Carrie; Knudson, Matt; Tumer, Kagan
2014-01-01
The use of unmanned aerial systems (UAS) in the national airspace is of growing interest to the research community. Safety and scalability of control algorithms are key to the successful integration of autonomous systems into a human-populated airspace. In order to ensure safety while still maintaining efficient paths of travel, these algorithms must also accommodate the heterogeneous path strategies of neighboring aircraft. We show that, using multiagent RL, we can improve the speed with which conflicts are resolved in cases with up to 80 aircraft within a section of the airspace. In addition, we show that the introduction of abstract agent strategy types to partition the state space is helpful in resolving conflicts, particularly in high congestion.
Announced Strategy Types in Multiagent RL for Conflict-Avoidance in the National Airspace
NASA Technical Reports Server (NTRS)
Rebhuhn, Carrie; Knudson, Matthew D.; Tumer, Kagan
2014-01-01
The use of unmanned aerial systems (UAS) in the national airspace is of growing interest to the research community. Safety and scalability of control algorithms are key to the successful integration of autonomous systems into a human-populated airspace. In order to ensure safety while still maintaining efficient paths of travel, these algorithms must also accommodate the heterogeneous path strategies of neighboring aircraft. We show that, using multiagent RL, we can improve the speed with which conflicts are resolved in cases with up to 80 aircraft within a section of the airspace. In addition, we show that the introduction of abstract agent strategy types to partition the state space is helpful in resolving conflicts, particularly in high congestion.
Travel time estimation using Bluetooth.
DOT National Transportation Integrated Search
2015-06-01
The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the ...
Dynamic travel time estimation using regression trees.
DOT National Transportation Integrated Search
2008-10-01
This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...
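A regression tree for this task is straightforward to sketch; the features, tree depth, and synthetic data below are illustrative assumptions, not the report's specification.

```python
# Toy sketch of travel time estimation with a regression tree. Data are
# synthetic; feature choice and depth are illustrative, not the report's.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n = 2000
hour = rng.integers(0, 24, n)
flow = rng.uniform(200, 2000, n)  # veh/h on the corridor
peak = ((7 <= hour) & (hour <= 9)) | ((16 <= hour) & (hour <= 18))
travel_time = 10 + 0.004 * flow + 6 * peak + rng.normal(0, 1, n)  # minutes

X = np.column_stack([hour, flow])
tree = DecisionTreeRegressor(max_depth=5).fit(X, travel_time)
print(tree.predict([[8, 1500], [13, 600]]))  # peak vs. off-peak estimates
```

The appeal of trees here is that they capture threshold effects (peak vs. off-peak regimes) without hand-specifying the breakpoints.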
Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure
Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.; ...
2016-04-05
While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink, for instance, the data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy, and operating system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems using buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth, and data transfer rates across the high speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS), when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.
Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.
While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink, for instance, the data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy, and operating system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems using buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth, and data transfer rates across the high speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS), when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on the framework and local optimizer design in practical engineering applications. First, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation performance in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford dual-subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. An optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for minimizing travel time on a highway network. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
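The evacuation planner builds on the classic Bellman-Ford relaxation, which is easy to sketch on its own; the dual-subgradient congestion terms are omitted here, and in the full scheme the edge costs would be adjusted by congestion prices. Node and edge names are illustrative.

```python
# Core Bellman-Ford relaxation of the kind the evacuation planner builds on.
# The dual-subgradient coordination is omitted; in the full scheme, edge
# costs would include congestion prices updated by subgradient steps.

def bellman_ford(nodes, edges, source):
    """edges: iterable of (u, v, cost). Returns shortest cost to each node."""
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):
        changed = False
        for u, v, c in edges:
            if dist[u] + c < dist[v]:  # relax edge (u, v)
                dist[v] = dist[u] + c
                changed = True
        if not changed:
            break  # early exit once distances stabilize
    return dist

nodes = ["room", "corridor", "stairs", "exit"]
edges = [("room", "corridor", 1.0), ("corridor", "stairs", 2.0),
         ("corridor", "exit", 5.0), ("stairs", "exit", 1.0)]
print(bellman_ford(nodes, edges, "room"))  # exit reachable at cost 4.0
```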
Models for IP/MPLS routing performance: convergence, fast reroute, and QoS impact
NASA Astrophysics Data System (ADS)
Choudhury, Gagan L.
2004-09-01
We show how to model the black-holing and looping of traffic during an Interior Gateway Protocol (IGP) convergence event in an IP network, and how to significantly improve both the convergence time and the packet loss duration through IGP parameter tuning and algorithmic improvement. We also explore some congestion avoidance and congestion control algorithms that can significantly improve the stability of networks in the face of occasional massive control message storms. Specifically, we show the positive impact of prioritizing Hello and Acknowledgement packets and of slowing down LSA generation and retransmission upon detecting congestion in the network. For some types of video, voice signaling, and circuit emulation applications it is necessary to reduce traffic loss durations following a convergence event to below 100 ms, and we explore that using Fast Reroute algorithms based on Multiprotocol Label Switching Traffic Engineering (MPLS-TE) that effectively bypass IGP convergence. We explore the scalability of primary and backup MPLS-TE tunnels when the MPLS-TE domain spans the backbone only or extends edge-to-edge. We also show how much extra backbone resource is needed to support Fast Reroute and how that can be reduced by taking advantage of Constrained Shortest Path First (CSPF) routing in MPLS-TE and by reserving less than 100% of the primary tunnel bandwidth during Fast Reroute.
A theoretical framework to predict the most likely ion path in particle imaging.
Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao
2017-03-07
In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The path estimate's precision is compared to the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios, (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.
Hyperswitch Network For Hypercube Computer
NASA Technical Reports Server (NTRS)
Chow, Edward; Madan, Herbert; Peterson, John
1989-01-01
Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.
An optimal routing strategy on scale-free networks
NASA Astrophysics Data System (ADS)
Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Qi, Zhaohui; Zhao, Yongbin
Traffic is one of the most fundamental dynamical processes in networked systems. With the traditional shortest path routing (SPR) protocol, traffic congestion is likely to occur at the hub nodes of scale-free networks. In this paper, we propose an improved optimal routing (IOR) strategy based on the betweenness centrality and the degree centrality of nodes in scale-free networks. With the proposed strategy, routing paths accurately bypass hub nodes in the network to enhance transport efficiency. Simulation results show that the traffic capacity, as well as other indexes reflecting transport efficiency, is further improved with the IOR strategy. Owing to the significantly improved traffic performance, this study is helpful for designing more efficient routing strategies in communication and transportation systems.
Selective epidemic vaccination under the performant routing algorithms
NASA Astrophysics Data System (ADS)
Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.
2018-04-01
Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the shortest-path-based algorithm does. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms, as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reduced traffic transport efficiency. This work proposes a solution that overcomes this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeds in eradicating the virus better than a pure random intervention under the performant routing algorithm strategies.
DOT National Transportation Integrated Search
2012-12-01
Estimates of value of time (VOT) and value of travel time savings (VTTS) are critical elements in benefit-cost analyses of transportation projects and in developing congestion pricing policies. In addition, differences in VTTS among various modes ...
Son, Sanghyun; Baek, Yunju
2015-01-01
As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%. PMID:26295230
Son, Sanghyun; Baek, Yunju
2015-08-18
As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%.
DOTD support for UTC project: travel time estimation using bluetooth, [research project capsule].
DOT National Transportation Integrated Search
2013-10-01
Travel time estimates are useful tools for measuring congestion in an urban area. Current practice involves using probe vehicles or video cameras to measure travel time, but this is a labor-intensive and expensive means of obtaining the information....
Back pressure based multicast scheduling for fair bandwidth allocation.
Sarkar, Saswati; Tassiulas, Leandros
2005-09-01
We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
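The scheduling decision at the heart of the policy above can be sketched in a few lines: at each link, serve the layer with the largest upstream-minus-downstream queue differential, so the important lower layers are protected under congestion. The per-layer queue representation and the numbers are illustrative assumptions, not the authors' exact construction.

```python
def backpressure_choice(q_up, q_down):
    """Pick the layer with the largest positive queue differential.

    q_up, q_down: queue lengths per layer at the sending and receiving
    node. Returns the layer index to serve, or None to stay idle.
    """
    diffs = [u - d for u, d in zip(q_up, q_down)]
    best = max(range(len(diffs)), key=lambda i: diffs[i])
    return best if diffs[best] > 0 else None

# Three layers; the congested downstream node has a long layer-2 queue,
# so the scheduler serves layer 0 (the base layer) first.
print(backpressure_choice(q_up=[5, 3, 4], q_down=[1, 2, 6]))  # -> 0
```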
On the Performance of TCP Spoofing in Satellite Networks
NASA Technical Reports Server (NTRS)
Ishac, Joseph; Allman, Mark
2001-01-01
In this paper, we analyze the performance of Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method, proposed by outside research, to improve the performance of data transfers over satellites is to use a performance enhancing proxy often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.
NASA Astrophysics Data System (ADS)
Katsura, Yasufumi; Attaviriyanupap, Pathom; Kataoka, Yoshihiko
In this research, the fundamental premises of deregulation of the electric power industry are reevaluated. The authors develop a simple model to represent a wholesale electricity market with a highly congested network. The model is developed by simplifying the power system and market of the New York ISO, based on available New York ISO data for 2004 with some estimation. Based on the developed model and historical construction cost data, the economic impact of transmission line additions on market participants and the impact of deregulation on power plant additions in a market with transmission congestion are studied. Simulation results show that market signals may fail to facilitate proper capacity additions, resulting in an undesirable cycle of over-construction and insufficient construction of capacity.
Exposure assessment of a cyclist to particles and chemical elements.
Ramos, C A; Silva, J R; Faria, T; Wolterbeek, T H; Almeida, S M
2017-05-01
Cycle paths can be used as a route for active transportation or simply to cycle for physical activity and leisure. However, exposure to air pollutants can be boosted while cycling in urban environments, due to the proximity to vehicular emissions and elevated breathing rates. The objective of this work was to assess the exposure of a cyclist to particles and to chemical elements by combining real-time aerosol mass concentration reading equipment and biomonitoring techniques. PM10 and PM2.5 were measured on three cycle paths located in Lisbon, during weekdays and weekends and during rush hours and off-peak hours, resulting in a total of 60 campaigns. Lichens were exposed along cycle paths for 3 months, and their element contents were measured by instrumental neutron activation analysis using the k0 methodology (k0-INAA). Using a bicycle commute route of lower traffic intensity and avoiding rush hours or other times with elevated vehicular congestion facilitate a reduction in exposure to pollutants. The implementation of cycle paths in cities is important to stimulate physical activity and active transportation; however, it is essential to consider ambient air and pollutant sources to create safer infrastructures.
Survival Analysis of Patients with End Stage Renal Disease
NASA Astrophysics Data System (ADS)
Urrutia, J. D.; Gayo, W. S.; Bautista, L. A.; Baccay, E. B.
2015-06-01
This paper provides a survival analysis of End Stage Renal Disease (ESRD) using Kaplan-Meier estimates and the Weibull distribution. The data were obtained from the records of V. L. Makabali Memorial Hospital with respect to time t (patient's age), covariates such as developed secondary disease (pulmonary congestion and cardiovascular disease), gender, and the event of interest: the death of ESRD patients. Survival and hazard rates were estimated using NCSS for the Weibull distribution and SPSS for the Kaplan-Meier estimates. Both lead to the same conclusion: the hazard rate increases and the survival rate decreases over time for ESRD patients diagnosed with pulmonary congestion, cardiovascular disease, or both. The analysis also shows that female patients have a greater risk of death than males. The probability of risk was given by the equation R = 1 - e^(-H(t)), where e^(-H(t)) is the survival function and H(t) is the cumulative hazard function, which was estimated using Cox regression.
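The risk formula quoted above is straightforward to evaluate. The sketch below pairs it with an assumed Weibull cumulative hazard, since the paper's fitted parameters are not reproduced here; the scale and shape values are illustrative only.

```python
import math

def risk(H_t):
    """Probability of the event by time t from cumulative hazard H(t):
    R = 1 - exp(-H(t)), where exp(-H(t)) is the survival function."""
    return 1.0 - math.exp(-H_t)

def weibull_cumulative_hazard(t, scale, shape):
    """Weibull cumulative hazard H(t) = (t / scale) ** shape.
    Illustrative parameterization, not the paper's fitted values."""
    return (t / scale) ** shape

H = weibull_cumulative_hazard(t=60.0, scale=70.0, shape=1.8)
print(f"estimated risk by age 60: {risk(H):.2f}")
```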
Spatial Analysis of Traffic and Routing Path Methods for Tsunami Evacuation
NASA Astrophysics Data System (ADS)
Fakhrurrozi, A.; Sari, A. M.
2018-02-01
A tsunami disaster unfolds very quickly and has large-scale material and non-material impacts. Community evacuation can cause mass panic, crowds, and traffic congestion. Further research on spatially based modelling, traffic engineering, and split-zone evacuation simulation is therefore crucial to reducing losses. This topic draws on information from previous research. Complex parameters include route selection, destination selection, the timing of both departure from the source and arrival at the destination, and other result parameters across various methods. The discussion emphasizes the simulation process and its results, traffic modelling, and the routing analysis that comes closest to real conditions in a tsunami evacuation. The method we highlight is the Clearance Time Estimate based on Location Priority, whose computational results are superior to others despite several drawbacks. This study is expected to inform the improvement and invention of new methods that can become part of decision support systems for tsunami disaster risk reduction.
The Physics of Traffic Congestion and Road Pricing in Transportation Planning
NASA Astrophysics Data System (ADS)
Levinson, David
2010-03-01
This presentation develops congestion theory and congestion pricing theory from its micro-foundations, the interaction of two or more vehicles. Using game theory, with a two-player game it is shown that the emergence of congestion depends on the players' relative valuations of early arrival, late arrival, and journey delay. Congestion pricing can be used as a cooperation mechanism to minimize total costs (if returned to the players). The analysis is then extended to the case of the three-player game, which illustrates congestion as a negative externality imposed on players who do not themselves contribute to it. A multi-agent model of travelers competing to utilize a roadway in time and space is presented. To realize the spillover effect among travelers, N-player games are constructed in which the strategy set includes N+1 strategies. We solve the N-player game (for N = 7) and find Nash equilibria if they exist. This model is compared to the bottleneck model. The results of numerical simulation show that the two models yield identical results in terms of lowest total costs and marginal costs when a social optimum exists. Moving from temporal dynamics to spatial complexity, using consistent agent-based techniques, we model the decision-making processes of users and infrastructure owner/operators to explore the welfare consequence of price competition, capacity choice, and product differentiation on congested transportation networks. Component models include: (1) An agent-based travel demand model wherein each traveler has learning capabilities and unique characteristics (e.g. value of time); (2) Econometric facility provision cost models; and (3) Representations of road authorities making pricing and capacity decisions. Different from small-network equilibrium models in prior literature, this agent-based model is applicable to pricing and investment analyses on large complex networks. The subsequent economic analysis focuses on the source, evolution, measurement, and impact of product differentiation with heterogeneous users on a mixed ownership network (with tolled and untolled roads). Two types of product differentiation in the presence of toll roads, path differentiation and space differentiation, are defined and measured for a base case and several variants with different types of price and capacity competition and with various degrees of user heterogeneity. The findings favor a fixed-rate road pricing policy compared to complete pricing freedom on toll roads. It is also shown that the relationship between net social benefit and user heterogeneity is not monotonic on a complex network with toll roads.
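The two-player result, that congestion emerges from relative valuations, can be made concrete with a toy departure game solved by enumerating pure-strategy Nash equilibria. The cost values below are hypothetical, chosen only so that peak congestion outweighs the schedule-delay cost of leaving early.

```python
import itertools

# Hypothetical costs: arriving early costs e; traveling at the peak incurs
# a congestion delay d only if the other player also travels at the peak.
e, d = 2.0, 3.0

def cost(my_choice, other_choice):
    if my_choice == "early":
        return e
    return d if other_choice == "peak" else 0.0

strategies = ["early", "peak"]
for s1, s2 in itertools.product(strategies, repeat=2):
    c1, c2 = cost(s1, s2), cost(s2, s1)
    stable1 = all(cost(a, s2) >= c1 for a in strategies)
    stable2 = all(cost(a, s1) >= c2 for a in strategies)
    if stable1 and stable2:
        print(f"pure Nash equilibrium: ({s1}, {s2}) with costs ({c1}, {c2})")
```

With these values the equilibria stagger the two departures, one early and one at the peak, illustrating how valuations rather than any intrinsically "bad" action determine whether congestion arises.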
Familoni, O B; Olunuga, T O; Olufemi, B W
2007-01-01
Advanced heart failure (AHF) accounts for about 25% of all cases of heart failure in Nigeria and is associated with a high mortality rate. Our aim was to undertake a clinical study of the pattern and outcome of AHF in our hospitalised patients and to determine the parameters associated with mortality and survival. Eighty-two patients with AHF were studied between January 2003 and December 2005. Baseline blood chemistry and haemodynamics were determined. A congestion score, including orthopnoea, elevated jugular venous pressure, oedema, ascites and loud P2, was derived, as well as a low perfusion score. Mortality was computed and risk estimated using the Pearson coefficient and log-rank test. Cox regression analysis was used to identify the predictors of survival. AHF accounted for 43.6% of all hospitalised heart failure patients, with a total mortality of 67.1%. Hypertension was the commonest cause of AHF. The parameters associated with increased mortality included age (r = 0.671; p = 0.02), presence of atrial fibrillation (r = 0.532; p = 0.045) and estimated glomerular filtration rate (r = -0.486; p = 0.04). The majority of patients (54.8%) were in the 'wet and cold' congestion category. The congestion score correlated with mortality. The indices of survival included lower age, lower systolic blood pressure, being literate and a lower congestion score. AHF was common in our cohort of hospitalised heart failure patients and was associated with a high mortality rate.
Costs of Chronic Diseases at the State Level: The Chronic Disease Cost Calculator
Murphy, Louise B.; Khavjou, Olga A.; Li, Rui; Maylahn, Christopher M.; Tangka, Florence K.; Nurmagambetov, Tursynbek A.; Ekwueme, Donatus U.; Nwaise, Isaac; Chapman, Daniel P.; Orenstein, Diane
2015-01-01
Introduction: Many studies have estimated national chronic disease costs, but state-level estimates are limited. The Centers for Disease Control and Prevention developed the Chronic Disease Cost Calculator (CDCC), which estimates state-level costs for arthritis, asthma, cancer, congestive heart failure, coronary heart disease, hypertension, stroke, other heart diseases, depression, and diabetes. Methods: Using publicly available and restricted secondary data from multiple national data sets from 2004 through 2008, disease-attributable annual per-person medical and absenteeism costs were estimated. Total state medical and absenteeism costs were derived by multiplying per-person costs from regressions by the number of people in the state treated for each disease. Medical costs were estimated for all payers and separately for Medicaid, Medicare, and private insurers. Projected medical costs for all payers (2010 through 2020) were calculated using medical costs and projected state population counts. Results: Median state-specific medical costs ranged from $410 million (asthma) to $1.8 billion (diabetes); median absenteeism costs ranged from $5 million (congestive heart failure) to $217 million (arthritis). Conclusion: CDCC provides methodologically rigorous chronic disease cost estimates. These estimates highlight possible areas of cost savings achievable through targeted prevention efforts or research into new interventions and treatments. PMID:26334712
Arques, Stephane; Roux, Emmanuel; Sbragia, Pascal; Pieri, Bertrand; Gelisse, Richard; Ambrosi, Pierre; Luccioni, Roger
2006-09-01
Based on the hypothesis that it reflects left ventricular (LV) diastolic pressures, B-type natriuretic peptide (BNP) is widely used as a first-line diagnostic complement in the emergency diagnosis of congestive heart failure (HF). The incremental diagnostic value of tissue Doppler echocardiography, a reliable noninvasive estimate of LV filling pressures, has been reported in patients with preserved LV ejection fraction and discrepancy between BNP levels and the clinical judgment; however, its clinical validity in such patients in the presence of BNP concentrations in the midrange, which may reflect intermediate, nondiagnostic levels of LV filling pressures, is unknown. Thirty-four patients without a history of HF, presenting with acute dyspnea at rest, BNP levels of 100-400 pg/ml and normal LV ejection fraction were prospectively enrolled (17 with congestive HF and 17 with a noncardiac cause). Tissue Doppler echocardiography was performed within 3 hours after admission. Unlike BNP (P = 0.78), Boston criteria (P = 0.0129), radiographic pulmonary edema (P = 0.0036) and the average E/Ea ratio (P = 0.0032) were predictive of congestive HF by logistic regression analysis. In this clinical setting, radiographic pulmonary edema had a positive predictive value of 80% in the diagnosis of congestive HF. In patients without evidence of radiographic pulmonary edema, average E/Ea > 10 was a powerful predictor of congestive HF (area under the ROC curve 0.886, P < 0.001, sensitivity 100% and specificity 78.6%). By better reflecting LV filling pressures, bedside tissue Doppler echocardiography accurately differentiates congestive HF from noncardiac causes in dyspneic patients with intermediate, nondiagnostic BNP levels and normal LV ejection fraction.
A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA
NASA Astrophysics Data System (ADS)
Khodabakhshi, Mohammad
2009-08-01
This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
Chand, Sai; Dixit, Vinayak V
2018-03-01
The repercussions of congestion and accidents on major highways can have significant negative impacts on the economy and environment. It is a primary objective of transport authorities to minimize the likelihood of these phenomena, to improve safety and overall network performance. In this study, we use the Hurst exponent metric from fractal theory as a congestion indicator for crash-rate modeling. We analyze one month of traffic speed data at several monitoring sites along the M4 motorway in Sydney, Australia and assess congestion patterns with the Hurst exponent of speed (H_speed). Random Parameters and Latent Class Tobit models were estimated to examine the effect of congestion on historical crash rates while accounting for unobserved heterogeneity. Using a latent class modeling approach, the motorway sections were probabilistically classified into two segments based on the presence of entry and exit ramps. This will allow transportation agencies to implement appropriate safety/traffic countermeasures when addressing accident hotspots or inadequately managed sections of motorway.
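A minimal rescaled-range (R/S) estimator of the Hurst exponent of a speed series, in the spirit of the H_speed indicator above: E[R/S] grows roughly as n^H, so H is the slope of log(R/S) against log(n). The window sizes and the toy data are illustrative, not the study's configuration.

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviations
            r = dev.max() - dev.min()       # range of deviations
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(0)
speeds = 60 + 0.1 * np.cumsum(rng.normal(0, 1, 2000))  # persistent toy series
print(f"H_speed ~ {hurst_rs(speeds):.2f}")  # > 0.5 suggests persistence
```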
Cole, Justin; Beare, Richard; Phan, Thanh G; Srikanth, Velandai; MacIsaac, Andrew; Tan, Christianne; Tong, David; Yee, Susan; Ho, Jesslyn; Layland, Jamie
2017-01-01
Recent evidence suggests hospitals fail to meet guideline-specified time to percutaneous coronary intervention (PCI) for a proportion of ST elevation myocardial infarction (STEMI) presentations. Implicit in achieving this time is the rapid assembly of crucial catheter laboratory staff. As a proof-of-concept, we set out to create regional maps that graphically show the impact of traffic congestion and distance to destination on staff recall travel times for STEMI, thereby producing a resource that could be used by staff to improve reperfusion time for STEMI. Travel times for staff recalled to one inner and one outer metropolitan hospital at midnight, 6 p.m., and 7 a.m. were estimated using the Google Maps Application Programming Interface. Computer modeling predictions were overlaid on metropolitan maps showing color-coded staff recall travel times for STEMI occurring within non-peak and peak hour traffic congestion times. Inner metropolitan hospital staff recall travel times were more affected by traffic congestion compared with outer metropolitan times, and the latter were more affected by distance. The estimated mean travel times to hospital during peak hour were greater than midnight travel times by 13.4 min to the inner and 6.0 min to the outer metropolitan hospital at 6 p.m. (p < 0.001). At 7 a.m., the mean difference was 9.5 min to the inner and 3.6 min to the outer metropolitan hospital (p < 0.001). Only 45% of inner metropolitan staff were predicted to arrive within 30 min at 6 p.m. compared with 100% at midnight (p < 0.001), and 56% of outer metropolitan staff at 6 p.m. (p = 0.021). Our results show that integration of map software with traffic congestion data, distance to destination and travel time can predict the optimal residence of staff when on-call for PCI.
NASA Astrophysics Data System (ADS)
Alaigba, D. B.; Soumah, M.; Banjo, M. O.
2017-05-01
The problem of urban mobility is complicated by traffic delay resulting from poor planning, high population density and the poor condition of roads within urban spaces. This study assessed traffic congestion resulting from the differential contributions made by various land uses along the Apapa-Oworoshoki expressway in Lagos metropolis. The data for this study came from both primary and secondary sources: GPS point data collected at selected points for traffic volume counts; observation of the nature of vehicular traffic congestion and of land use types along the corridor; and existing data on traffic counts along the corridor, a connectivity map and a land use map sourced from the relevant authorities. Traffic congestion within the area was estimated using the volume/capacity ratio (V/C). A heterogeneity index was developed and used to quantify the percentage contribution to traffic volume from the various land-use categories. The Analytic Hierarchy Process (AHP) and knowledge-based weighting were used to rank the importance of the different heterogeneity indices. Results showed a significant relationship between the degree of heterogeneity of the land use pattern and road traffic congestion. The computed volume/capacity ratio revealed that the route corridor exceeds its design capacity in the southward direction between the hours of 8 a.m. and 12 p.m. on working days. Six major nodes analyzed along the corridor were all above the expected Passenger Car Unit (PCU): "Oshodi" 15%, "Airport junction" 10%, "Cele bus stop" 21%, "Mile 2" 14%, "Berger" 15% and "Tincan bus stop" 33%, indicating heavy traffic congestion.
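A toy computation of the two indicators named above, the V/C congestion measure and a weighted land-use heterogeneity index. The land-use shares and the AHP-style weights are invented for illustration and do not reproduce the study's values.

```python
# Illustrative V/C and weighted heterogeneity index calculation.
def volume_capacity_ratio(peak_hour_volume_pcu, capacity_pcu):
    return peak_hour_volume_pcu / capacity_pcu

# Share of traffic contributed by each land-use category at a node,
# weighted by assumed AHP importance weights (weights sum to 1).
landuse_share = {"commercial": 0.45, "industrial": 0.35, "residential": 0.20}
ahp_weight = {"commercial": 0.5, "industrial": 0.3, "residential": 0.2}

heterogeneity_index = sum(landuse_share[k] * ahp_weight[k] for k in landuse_share)
vc = volume_capacity_ratio(peak_hour_volume_pcu=5400, capacity_pcu=4500)

print(f"V/C = {vc:.2f} (>1.0 means demand exceeds design capacity)")
print(f"weighted heterogeneity index = {heterogeneity_index:.2f}")
```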
Chest ultrasound and hidden lung congestion in peritoneal dialysis patients.
Panuccio, Vincenzo; Enia, Giuseppe; Tripepi, Rocco; Torino, Claudia; Garozzo, Maurizio; Battaglia, Giovanni Giorgio; Marcantoni, Carmelita; Infantone, Lorena; Giordano, Guido; De Giorgi, Maria Loreta; Lupia, Mario; Bruzzese, Vincenzo; Zoccali, Carmine
2012-09-01
Chest ultrasound (US) is a non-invasive, well-validated technique for estimating extravascular lung water (LW) in patients with heart disease and in end-stage renal disease. We systematically applied this technique to the whole peritoneal dialysis (PD) population of five dialysis units. We studied the cross-sectional association between LW, echocardiographic parameters, clinical markers [pedal oedema, New York Heart Association (NYHA) class] and bioelectrical impedance analysis (BIA) markers of volume status in 88 PD patients. Moderate to severe lung congestion was evident in 41 (46%) patients. Ejection fraction was the echocardiographic parameter with the strongest independent association with LW (r = -0.40, P = 0.002). Oedema did not associate with LW on univariate or multivariate analysis. NYHA class was slightly associated with LW (r = 0.21, P = 0.05). Among patients with severe lung congestion, only 27% had pedal oedema and the majority (57%) had no dyspnoea (NYHA class I). Similarly, the prevalence of patients with BIA evidence of volume excess was small (11%) and not significantly different (P = 0.79) from that observed in patients with mild or no congestion (9%). In PD patients, LW by chest US reveals moderate to severe lung congestion in a significant proportion of asymptomatic patients. Intervention studies are necessary to prove the usefulness of chest US for optimizing the control of fluid excess in PD patients.
Feasibility of lane closures using probe data.
DOT National Transportation Integrated Search
2017-04-01
To develop an adequate traffic operations management and congestion mitigation plan for every roadway maintenance and construction project requiring lane closures, transportation agencies need accurate and reliable estimates of traffic impacts as...
Chain-Based Communication in Cylindrical Underwater Wireless Sensor Networks
Javaid, Nadeem; Jafri, Mohsin Raza; Khan, Zahoor Ali; Alrajeh, Nabil; Imran, Muhammad; Vasilakos, Athanasios
2015-01-01
Appropriate network design is very significant for Underwater Wireless Sensor Networks (UWSNs). Application-oriented UWSNs are planned to achieve certain objectives, so there is always a demand for efficient data routing schemes that can fulfill the requirements of application-oriented UWSNs. These networks can be of any shape, i.e., rectangular, cylindrical or square. In this paper, we propose chain-based routing schemes for application-oriented cylindrical networks and also formulate mathematical models to find a global optimum path for data transmission. In the first scheme, we devise four interconnected chains of sensor nodes to perform data communication. In the second scheme, two chains of sensor nodes are interconnected, whereas in the third scheme single-chain-based routing is performed in cylindrical networks. After finding local optimum paths in separate chains, we find global optimum paths through their interconnection. Moreover, we develop a computational model for the analysis of end-to-end delay. We compare the performance of the three proposed schemes with that of Power-Efficient Gathering in Sensor Information Systems (PEGASIS) and Congestion-adjusted PEGASIS (C-PEGASIS). Simulation results show that our proposed 4-chain based scheme performs better than the other selected schemes in terms of network lifetime, end-to-end delay, path loss, transmission loss, and packet sending rate. PMID:25658394
Estimating Right Atrial Pressure Using Ultrasounds: An Old Issue Revisited With New Methods.
De Vecchis, Renato; Baldi, Cesare; Giandomenico, Giuseppe; Di Maio, Marco; Giasi, Anna; Cioppa, Carmela
2016-08-01
Knowledge of right atrial pressure (RAP) values is critical to ascertain the existence of a state of hemodynamic congestion, irrespective of the possible presence of signs and symptoms of clinical congestion and cardiac overload, which can be lacking in some conditions of concealed or clinically misleading cardiac decompensation. In addition, a more reliable estimate of RAP would make it possible to determine the systolic pulmonary arterial pressure more accurately using echocardiographic methods alone. The authors briefly illustrate some of the criteria that have been implemented to obtain a non-invasive RAP estimate, some of which have been approved by current guidelines while others still await official endorsement from the scientific societies of cardiology. The sometimes opposing views of researchers who have studied the problem are presented, and prospects for the development of new diagnostic criteria are outlined, in particular those derived from the matched use of two- and three-dimensional echocardiographic parameters.
Wang, Lusheng; Wang, Yamei; Ding, Zhizhong; Wang, Xiumin
2015-09-18
With the rapid development of wireless networking technologies, the Internet of Things and heterogeneous cellular networks (HCNs) tend to be integrated to form a promising wireless network paradigm for 5G. Hyper-dense sensor and mobile devices will be deployed under the coverage of heterogeneous cells, so that each of them could freely select any available cell covering it and compete for resource with others selecting the same cell, forming a cell selection (CS) game between these devices. Since different types of cells usually share the same portion of the spectrum, devices selecting overlapped cells can experience severe inter-cell interference (ICI). In this article, we study the CS game among a large amount of densely-deployed sensor and mobile devices for their uplink transmissions in a two-tier HCN. ICI is embedded with the traditional congestion game (TCG), forming a congestion game with ICI (CGI) and a congestion game with capacity (CGC). For the three games above, we theoretically find the circular boundaries between the devices selecting the macrocell and those selecting the picocells, indicated by the pure strategy Nash equilibria (PSNE). Meanwhile, through a number of simulations with different picocell radii and different path loss exponents, the collapse of the PSNE impacted by severe ICI (i.e., a large number of picocell devices change their CS preferences to the macrocell) is profoundly revealed, and the collapse points are identified.
Agent Reward Shaping for Alleviating Traffic Congestion
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian
2006-01-01
Traffic congestion problems provide a unique environment to study how multi-agent systems promote desired system-level behavior. What is particularly interesting in this class of problems is that no individual action is intrinsically "bad" for the system, but combinations of actions among agents lead to undesirable outcomes. As a consequence, agents need to learn how to coordinate their actions with those of other agents, rather than learn a particular set of "good" actions. This problem is ubiquitous in various traffic problems, including selecting departure times for commuters, routes for airlines, and paths for data routers. In this paper we present a multi-agent approach to two traffic problems, where for each driver an agent selects the most suitable action using reinforcement learning. The agent rewards are based on concepts from collectives and aim to provide the agents with rewards that are both easy to learn and that, if learned, lead to good system-level behavior. In the first problem, we study how agents learn the best departure times of drivers in a daily commuting environment and how following those departure times alleviates congestion. In the second problem, we study how agents learn to select desirable routes to improve traffic flow and minimize delays for all drivers. In both sets of experiments, agents using collective-based rewards produced near optimal performance (93-96% of optimal), whereas agents using system rewards (63-68%) barely outperformed random action selection (62-64%) and agents using local rewards (48-72%) performed worse than random in some instances.
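A minimal sketch of the collective-based ("difference") reward idea for the departure-time problem: each agent is credited with the change in global reward caused by its own presence, computed counterfactually by removing it from its chosen slot. The quadratic congestion cost, learning rate, and exploration schedule are assumptions for illustration, not the paper's setup.

```python
import random
from collections import Counter

SLOTS, AGENTS, EPISODES = 5, 60, 400
rng = random.Random(0)
values = [[0.0] * SLOTS for _ in range(AGENTS)]  # per-agent slot estimates

def global_reward(counts):
    # Quadratic congestion penalty per slot: crowded slots hurt everyone.
    return -sum(c * c for c in counts.values())

for ep in range(EPISODES):
    eps = max(0.05, 1.0 - ep / 200)              # decaying exploration
    choices = [rng.randrange(SLOTS) if rng.random() < eps
               else max(range(SLOTS), key=lambda s: values[i][s])
               for i in range(AGENTS)]
    counts = Counter(choices)
    g = global_reward(counts)
    for i, s in enumerate(choices):
        counts[s] -= 1                           # counterfactual without agent i
        d = g - global_reward(counts)            # difference reward D_i
        counts[s] += 1
        values[i][s] += 0.1 * (d - values[i][s])

final = Counter(max(range(SLOTS), key=lambda s: values[i][s]) for i in range(AGENTS))
print("final slot loading:", dict(sorted(final.items())))
```

Because the difference reward for a slot with c occupants works out to 1 - 2c under this penalty, each agent is steered toward less crowded slots, spreading departures and alleviating the simulated congestion.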
Speed and Delay Prediction Models for Planning Applications
DOT National Transportation Integrated Search
1999-01-01
Estimation of vehicle speed and delay is fundamental to many forms of transportation planning analyses including air quality, long-range travel forecasting, major investment studies, and congestion management systems. However, existing planning...
Qi, Yi; Padiath, Ameena; Zhao, Qun; Yu, Lei
2016-10-01
The Motor Vehicle Emission Simulator (MOVES) quantifies emissions as a function of vehicle modal activities; hence, the vehicle operating mode distribution is the most vital input for running MOVES at the project level. The preparation of operating mode distributions requires significant effort in data collection and processing. This study develops operating mode distributions for both freeway and arterial facilities under different traffic conditions. For this purpose, we (1) collected and processed geographic information system (GIS) data, (2) developed a model of CO2 emissions and congestion from observations, and (3) implemented the model to evaluate potential emission changes from a hypothetical roadway accident scenario. This study presents a framework by which practitioners can assess emission levels in the development of different strategies for traffic management and congestion mitigation. This paper prepared the primary input required for running MOVES, that is, the operating mode ID distribution, and developed models for estimating emissions for different types of roadways under different congestion levels. The results of this study will provide transportation planners and environmental analysts with methods for qualitatively assessing the air quality impacts of different transportation operation and demand management strategies.
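A simplified illustration of building an operating mode distribution from a second-by-second speed trace. Real MOVES operating mode IDs use EPA-defined vehicle-specific power (VSP) and speed bins, so the coarse labels and thresholds here are assumptions; the VSP expression follows the widely cited Jimenez-Palacios light-duty approximation, not the exact MOVES defaults.

```python
import numpy as np

def vsp(v_ms, a_ms2, grade=0.0):
    """Approximate vehicle-specific power (kW/tonne), light-duty vehicle."""
    return v_ms * (1.1 * a_ms2 + 9.81 * grade + 0.132) + 0.000302 * v_ms ** 3

def op_mode(v_ms, a_ms2):
    """Map a 1 Hz record to a coarse operating-mode label (a sketch)."""
    mph = v_ms * 2.237
    if a_ms2 <= -0.894:          # roughly -2 mph/s, MOVES braking threshold
        return "braking"
    if mph < 1.0:
        return "idle"
    p = vsp(v_ms, a_ms2)
    band = "low" if mph < 25 else ("mid" if mph < 50 else "high")
    return f"cruise/accel {band} VSP={'<0' if p < 0 else '0-6' if p < 6 else '>=6'}"

# Toy speed trace (m/s); the distribution is the time share per mode.
v = np.array([0, 2, 5, 9, 13, 15, 15, 12, 8, 3, 0], dtype=float)
a = np.diff(v, prepend=v[0])
modes, counts = np.unique([op_mode(vi, ai) for vi, ai in zip(v, a)],
                          return_counts=True)
for m, c in zip(modes, counts):
    print(f"{m:>28}: {c / len(v):.2f}")
```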
Combining path integration and remembered landmarks when navigating without vision.
Kalia, Amy A; Schrater, Paul R; Legge, Gordon E
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
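The averaging behavior reported above is commonly modeled as reliability-weighted cue combination, in which each cue is weighted by its inverse variance and the fused estimate has lower variance than either cue alone. A minimal sketch with assumed noise levels for the two cues:

```python
def combine(x_landmark, sigma_lm, x_path, sigma_pi):
    """Minimal-variance combination of two location cues (meters)."""
    w_lm = (1 / sigma_lm ** 2) / (1 / sigma_lm ** 2 + 1 / sigma_pi ** 2)
    estimate = w_lm * x_landmark + (1 - w_lm) * x_path
    combined_var = 1 / (1 / sigma_lm ** 2 + 1 / sigma_pi ** 2)
    return estimate, combined_var

# Congruent cues: the fused estimate lies between them, with lower
# variance than either cue alone, matching the precision gain above.
est, var = combine(x_landmark=10.0, sigma_lm=0.5, x_path=11.0, sigma_pi=1.0)
print(f"fused location = {est:.2f} m, variance = {var:.2f} (vs 0.25 and 1.00)")
```

Gating would correspond to falling back on a single cue when the two estimates are too far apart relative to their noise; the unconditional average above covers only the congruent case.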
Transport growth in Bangkok: Energy, environment, and traffic congestion. Workshop proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philpott, J.
1995-07-01
Bangkok, the capital of Thailand, is a physically and economically complex city with a complicated transport system. With daily traffic congestion averaging 16 hours, the air quality is such that breathing street-level pollution for eight hours is roughly equivalent to smoking nine cigarettes per day. Estimates suggest idling traffic costs up to $1.6 billion annually. Energy use within the transport sector is steadily rising, with an estimated two-and-one-half-fold increase over 11 years. Severe health impacts have begun to affect many residents, with young children and the elderly particularly vulnerable. Bangkok's air quality and congestion problems are far from hopeless. Great potential exists for Bangkok to remedy its transport-related problems. The city has many of the characteristics needed for an efficient, economical transport system. For example, its high density makes the city a prime candidate for an efficient mass transit system, and the multitude and close proximity of shops, street vendors, restaurants, and residential areas is highly conducive to walking and cycling. Technical knowledge and capacity to devise and implement innovative policies and projects to address air quality and congestion problems are plentiful. There is also consensus among Bangkokians that something needs to be done immediately to clear the air and the roads. However, little has been done. This report proposes a new approach to transport planning for Bangkok that integrates consideration of ecological, social, and financial viability in the process of making decisions about managing existing infrastructure and investments in new infrastructure. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
Performance Analysis of Stop-Skipping Scheduling Plans in Rail Transit under Time-Dependent Demand
Cao, Zhichao; Yuan, Zhenzhou; Zhang, Silin
2016-01-01
Stop-skipping is a key method for alleviating congestion in rail transit, where schedules are sometimes difficult to implement. Several mechanisms have been proposed and analyzed in the literature, but very few performance comparisons are available. This study incorporated train choice behavior estimation into the model, considering passengers' perception. If a passenger's train path can be identified, this information would be useful for improving the stop-skipping schedule service. Multi-performance is a key characteristic of our five proposed stop-skipping schedules, and quantified analysis can be used to illustrate the different effects of well-known deterministic and stochastic forms. Problems in this novel category of forms were examined in the context of a single line rather than a transit network. We analyzed four deterministic forms based on the well-known A/B stop-skipping operating strategy. A stochastic form was innovatively modeled as a binary integer programming problem. We present a performance analysis of our proposed model to demonstrate that stop-skipping can feasibly be used to improve passenger service and enhance the elasticity of train operations under demand variations, along with an explicit parametric discussion. PMID:27420087
NASA Astrophysics Data System (ADS)
Davis, L. C.
2015-03-01
The Texas A&M Transportation Institute estimated that traffic congestion cost the United States $121 billion in 2011 (the latest data available). The cost is due to wasted time and fuel. In addition to accidents and road construction, factors contributing to congestion include large demand, instability of high-density free flow and selfish behavior of drivers, which produces self-organized traffic bottlenecks. Extensive data collected on instrumented highways in various countries have led to a better understanding of traffic dynamics. From these measurements, Boris Kerner and colleagues developed a new theory called three-phase theory. They identified three major phases of flow observed in the data: free flow, synchronous flow and wide moving jams. The intermediate phase is called synchronous because vehicles in different lanes tend to have similar velocities. This congested phase, characterized by lower velocities yet modestly high throughput, frequently occurs near on-ramps and lane reductions. At present there are only two widely used methods of congestion mitigation: ramp metering and the display of current travel-time information to drivers. To find more effective methods to reduce congestion, researchers perform large-scale simulations using models based on the new theories. An algorithm has been proposed to realize Wardrop equilibria with real-time route information. Such equilibria have equal travel time on alternative routes between a given origin and destination. An active area of current research is the dynamics of connected vehicles, which communicate wirelessly with other vehicles and the surrounding infrastructure. These systems show great promise for improving traffic flow and safety.
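To show what a Wardrop equilibrium looks like numerically, the sketch below uses the classic method of successive averages on two routes with a BPR volume-delay function; this is a textbook stand-in, not the particular algorithm referenced above, and the demand and parameters are illustrative.

```python
def bpr(t0, flow, capacity, alpha=0.15, beta=4):
    """BPR volume-delay function: travel time grows with the flow/capacity ratio."""
    return t0 * (1 + alpha * (flow / capacity) ** beta)

demand = 3000.0                       # vehicles/hour between one O-D pair
t0 = [10.0, 12.0]                     # free-flow times of routes A and B (min)
cap = [1500.0, 2000.0]
flow = [demand / 2, demand / 2]

for k in range(1, 200):
    times = [bpr(t0[i], flow[i], cap[i]) for i in range(2)]
    best = min(range(2), key=lambda i: times[i])
    target = [demand if i == best else 0.0 for i in range(2)]  # all-or-nothing
    step = 1.0 / (k + 1)               # MSA step size
    flow = [(1 - step) * flow[i] + step * target[i] for i in range(2)]

times = [bpr(t0[i], flow[i], cap[i]) for i in range(2)]
print(f"flows: {flow[0]:.0f}, {flow[1]:.0f}; "
      f"times: {times[0]:.2f}, {times[1]:.2f} min (approximately equal)")
```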
Cost estimate modeling of transportation management plans for highway projects.
DOT National Transportation Integrated Search
2012-05-01
Highway rehabilitation and reconstruction projects frequently cause road congestion and increase safety concerns while limiting access for road users. State Transportation Agencies (STAs) are challenged to find safer and more efficient ways to renew ...
Simulation and analysis of three congested weigh stations using Westa
DOT National Transportation Integrated Search
2001-01-01
A user-friendly model for personal computers, "Vehicle/Highway Performance Predictor," was developed to estimate fuel consumption and exhaust emissions related to modes of vehicle operations on highways of various configurations and traffic controls ...
Methodology update for estimating volume to service flow ratio.
DOT National Transportation Integrated Search
2015-12-01
Volume/service flow ratio (VSF) is calculated by the Highway Performance Monitoring System (HPMS) software as an indicator of peak hour congestion. It is an essential input to the Kentucky Transportation Cabinet's (KYTC) key planning applications, ...
Neck Muscle Moment Arms Obtained In-Vivo from MRI: Effect of Curved and Straight Modeled Paths.
Suderman, Bethany L; Vasavada, Anita N
2017-08-01
Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). Moment arm estimates were also found to be significantly different among moment arm calculation methods for 11 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). In particular, using straight lines to model muscle paths can lead to overestimating neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.
Cost estimate modeling of transportation management plans for highway projects: [research brief].
DOT National Transportation Integrated Search
2012-05-01
Highway rehabilitation and reconstruction projects frequently cause road congestion and increase safety concerns while limiting access for road users. State Transportation Agencies (STAs) are challenged to find safer and more efficient ways to renew ...
Estimates of Urban Roadway Congestion, 1990: Interim Report
DOT National Transportation Integrated Search
1993-03-01
This research report is the fifth year continuation of a six year research effort focused on quantifying urban mobility. This study contains the facility information for 50 urban areas throughout the country. The database used for this research conta...
Tomorrow's Transportation Market : Developing an Innovative, Seamless Transportation System
DOT National Transportation Integrated Search
2013-04-17
With the cost of congestion in the United States estimated to be in the order of $121 billion, transportation planners are under increasing pressure to improve conditions and meet projected demand increases. Harnessing emerging technologies to develo...
Real time freeway incident detection.
DOT National Transportation Integrated Search
2014-04-01
The US Department of Transportation (US-DOT) estimates that over half of all congestion events are caused by highway incidents rather than by rush-hour traffic in big cities. Real-time incident detection on freeways is an important part of any mo...
APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS
NASA Astrophysics Data System (ADS)
Mehran, Babak; Nakamura, Hideki
Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes, based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying the Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
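A Monte-Carlo-flavored sketch of the buffer-time-index calculation outlined above: sample demand and capacity per interval, convert excess demand into extra travel time, and summarize reliability as (95th percentile - mean) / mean. The distributions, the incident model, and the crude delay mapping are illustrative assumptions, not the paper's calibrated relationships.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
free_flow_time = 20.0                         # minutes over the segment
demand = rng.normal(3800, 500, N)             # veh/h per five-minute scenario
capacity = rng.normal(4000, 300, N)           # veh/h, weather variability
capacity *= np.where(rng.random(N) < 0.05, 0.6, 1.0)  # random incidents

overload = np.maximum(demand / capacity - 1.0, 0.0)
travel_time = free_flow_time * (1.0 + 2.5 * overload)  # crude delay mapping

mean_t = travel_time.mean()
t95 = np.percentile(travel_time, 95)
print(f"buffer time index = (t95 - mean)/mean = {(t95 - mean_t) / mean_t:.2f}")
```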
Chen, Huifang; Fan, Guangyu; Xie, Lei; Cui, Jun-Hong
2013-01-01
Due to the characteristics of underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than that of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is shorter greatly than that used in the CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols. PMID:24193100
NASA Technical Reports Server (NTRS)
Tralli, David M.; Lichten, Stephen M.; Herring, Thomas A.
1992-01-01
Kalman filter estimates of zenith nondispersive atmospheric path delays at Westford, Massachusetts, Fort Davis, Texas, and Mojave, California, were obtained from independent analyses of data collected during January and February 1988 using the Global Positioning System (GPS) and very long baseline interferometry (VLBI). The apparent accuracy of the path delays is inferred by examining the estimates and covariances from both sets of data. The ability of the geodetic data to resolve zenith path delay fluctuations is determined by further comparing the GPS Kalman filter estimates with corresponding wet path delays derived from water vapor radiometric data available at Mojave over two 8-hour data spans within the comparison period. GPS and VLBI zenith path delay estimates agree well within one-standard-deviation formal uncertainties (10-20 mm for GPS and 3-15 mm for VLBI) in four out of the five possible comparisons, with maximum differences of 5 and 21 mm over 8- to 12-hour data spans.
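As a rough illustration of the estimation machinery, here is a one-dimensional Kalman filter with random-walk dynamics, a standard stochastic model for zenith tropospheric delay, run on synthetic data. The noise levels, epoch spacing, and data are assumptions, not those of the GPS/VLBI analyses above.

```python
import numpy as np

rng = np.random.default_rng(3)
steps = 288                                   # 5-min epochs over one day
truth = 150 + np.cumsum(rng.normal(0, 0.8, steps))   # mm, random-walk delay
obs = truth + rng.normal(0, 10, steps)               # mm, noisy observations

q, r = 0.8 ** 2, 10.0 ** 2                    # process / measurement variances
x, p = obs[0], r                              # initial state and covariance
estimates = []
for z in obs:
    p += q                                    # predict (random-walk dynamics)
    k = p / (p + r)                           # Kalman gain
    x += k * (z - x)                          # update with new measurement
    p *= (1 - k)
    estimates.append(x)

rms = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"post-fit RMS vs truth: {rms:.1f} mm")
```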
Analysis of travel-time reliability for freight corridors connecting the Pacific Northwest.
DOT National Transportation Integrated Search
2012-11-01
A new methodology and algorithms were developed to combine diverse data sources and to estimate the impacts of recurrent and non-recurrent congestion on the reliability of freight movements and on delays, costs, and emissions. The results suggest that tra...
Improving mobility information with better data and estimation procedures
DOT National Transportation Integrated Search
2010-03-01
The Texas Transportation Institute (TTI) continues to be a national leader in providing congestion and mobility information. The information produced by TTI is used to communicate the issues of urban mobility at all levels of government in the U....
DOT National Transportation Integrated Search
2016-10-03
Real-time parking availability information is important in urban areas, and if available could reduce congestion, pollution, and gas consumption. This project presents a software solution called PhonePark for detecting the availability of on-street p...
Measuring the marginal cost of congestion.
DOT National Transportation Integrated Search
2008-12-01
This study attempted to estimate the effect of additional vehicles joining the traffic stream when it is near capacity. The study used data from highways I-35, I-45 in Texas and I-80 in California aggregated at different time intervals. Various m...
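The marginal-cost idea can be made concrete with a BPR volume-delay curve: with t(v) = t0(1 + a(v/c)^b), one extra vehicle imposes v * dt/dv of delay on everyone else. The parameters below are illustrative defaults, not the study's estimates.

```python
def bpr_time(v, t0=10.0, c=2000.0, a=0.15, b=4):
    """Travel time (min) on a link carrying flow v (veh/h)."""
    return t0 * (1 + a * (v / c) ** b)

def marginal_external_delay(v, t0=10.0, c=2000.0, a=0.15, b=4):
    """Delay one extra vehicle imposes on all others: v * dt/dv (veh-min)."""
    dt_dv = t0 * a * b * (v / c) ** (b - 1) / c   # analytic derivative
    return v * dt_dv

for v in (1000, 1800, 2200):
    print(f"v={v}: t={bpr_time(v):.1f} min, "
          f"external delay={marginal_external_delay(v):.2f} veh-min")
```

The external delay grows steeply as flow approaches capacity, which is why the marginal cost of congestion is hard to measure from lightly loaded periods alone.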
Kinematically redundant robot manipulators
NASA Technical Reports Server (NTRS)
Baillieul, J.; Hollerbach, J.; Brockett, R.; Martin, D.; Percy, R.; Thomas, R.
1987-01-01
Research on control, design and programming of kinematically redundant robot manipulators (KRRM) is discussed. These are devices in which there are more joint space degrees of freedom than are required to achieve every position and orientation of the end-effector necessary for a given task in a given workspace. The technological developments described here deal with: kinematic programming techniques for automatically generating joint-space trajectories to execute prescribed tasks; control of redundant manipulators to optimize dynamic criteria (e.g., applications of forces and moments at the end-effector that optimally distribute the loading of actuators); and design of KRRMs to optimize functionality in congested work environments or to achieve other goals unattainable with non-redundant manipulators. Kinematic programming techniques are discussed, which show that some pseudo-inverse techniques that have been proposed for redundant manipulator control fail to achieve the goals of avoiding kinematic singularities and also generating closed joint-space paths corresponding to close paths of the end effector in the workspace. The extended Jacobian is proposed as an alternative to pseudo-inverse techniques.
Flight-path estimation in passive low-altitude flight by visual cues
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kohn, S.
1993-01-01
A series of experiments was conducted, in which subjects had to estimate the flight path while passively being flown in straight or in curved motion over several types of nominally flat, textured terrain. Three computer-generated terrain types were investigated: (1) a random 'pole' field, (2) a flat field consisting of random rectangular patches, and (3) a field of random parallelepipeds. Experimental parameters were the velocity-to-height (V/h) ratio, the viewing distance, and the terrain type. Furthermore, the effect of obscuring parts of the visual field was investigated. Assumptions were made about the basic visual-field information by analyzing the pattern of line-of-sight (LOS) rate vectors in the visual field. The experimental results support these assumptions and show that, for both a straight as well as a curved flight path, the estimation accuracy and estimation times improve with the V/h ratio. Error scores for the curved flight path are found to be about 3 deg in visual angle higher than for the straight flight path, and the sensitivity to the V/h ratio is found to be considerably larger. For the straight motion, the flight path could be estimated successfully from local areas in the far field. Curved flight-path estimates have to rely on the entire LOS rate pattern.
Path Flow Estimation Using Time Varying Coefficient State Space Model
NASA Astrophysics Data System (ADS)
Jou, Yow-Jen; Lan, Chien-Lun
2009-08-01
Dynamic path flow information is crucial in transportation operation and management, e.g., dynamic traffic assignment, scheduling plans, and signal timing. Time-dependent path information, important in many respects, is nearly impossible to obtain directly. Consequently, researchers have sought estimation methods for deriving valuable path flow information from less expensive traffic data, primarily the link traffic counts of surveillance systems. This investigation considers a path flow estimation problem involving a time-varying coefficient state space model, the Gibbs sampler, and the Kalman filter. Numerical examples using part of a real network of the Taipei Mass Rapid Transit with real O-D matrices are presented to demonstrate the accuracy of the proposed model. Results of this study show that the time-varying coefficient state space model is very effective in estimating path flow compared to the time-invariant model.
Using step and path selection functions for estimating resistance to movement: Pumas as a case study
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2015-01-01
GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....
Cost Estimates For Selected California Smart Traveler Operational Tests, Volume 1, Technical Report
DOT National Transportation Integrated Search
1993-03-01
This report also compares the costs of using "smart-traveler" approaches with the costs of expanding conventional transit services to reduce traffic congestion, air pollution and mobility problems in suburban areas, where most people in U.S. metropol...
DOT National Transportation Integrated Search
2013-03-01
TTI's Urban Mobility Report (UMR) is acknowledged as the most authoritative source of information about traffic congestion and its possible solutions. As policymakers from the local to national levels devise strategies to reduce greenhouse gas ...
Impact of Personal Attitudes on Propensity to Use Autonomous Vehicles for Intercity Travel.
DOT National Transportation Integrated Search
2016-01-01
Autonomous vehicles are about to become a reality. The researchers estimate the benefits from each autonomous vehicle to be between $2000 and $4500 per vehicle. The societal benefits include higher travel time savings, reduced congestion, fuel...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-01-01
Contents: Volume 2: Commissioned Papers: Congestion Trends in Metropolitan Areas; Alternative Methods for Measuring Congestion Levels; Potential of Congestion Pricing in the Metropolitan Washington Region; Transportation Pricing and Travel Behavior; Peak Pricing Strategies in Transportation, Utilities, and Telecommunications: Lessons for Road Pricing; Cashing Out Employer-Paid Parking: A Precedent for Congestion Pricing; The New York Region: First in Tolls, Last in Road Pricing; Pricing Urban Roadways: Administrative and Institutional Issues; Equity and Fairness Considerations of Congestion Pricing; The Politics of Congestion Pricing; Institutional and Political Challenges in Implementing Congestion Pricing: Case Study of the San Francisco Bay Area; How Congestion Pricing Came to Be Proposed in the San Diego Region: A Case History; Urban Transportation Congestion Pricing: Effects on Urban Form; Congestion Pricing and Motor Vehicle Emissions: An Initial Review; Private Toll Roads: Acceptability of Congestion Pricing in Southern California; Potential of Next-Generation Technology; Electronic Toll Collection Systems; and Impacts of Congestion Pricing on Transit and Carpool Demand and Supply.
Traffic congestion and reliability : trends and advanced strategies for congestion mitigation.
DOT National Transportation Integrated Search
2005-09-01
The report Traffic Congestion and Reliability: Trends and Advanced Strategies for Congestion Mitigation provides a snapshot of congestion in the United States by summarizing recent trends in congestion, highlighting the role of travel time reli...
Quaassdorff, Christina; Borge, Rafael; Pérez, Javier; Lumbreras, Julio; de la Paz, David; de Andrés, Juan Manuel
2016-10-01
This paper presents the evaluation of emissions from vehicle operations in a domain of 300 m × 300 m covering a complex urban roundabout with high traffic density in Madrid. Micro-level simulation was successfully applied to estimate the emissions on a scale of meters. Two programs were used: i) VISSIM to simulate the traffic on the square and to compute velocity-time profiles; and ii) VERSIT+micro, through ENVIVER, which uses VISSIM outputs to compute the related emissions at vehicle level. Data collection was achieved by a measurement campaign obtaining empirical data on vehicle flows and traffic intensities. Twelve simulations of different traffic situations (scenarios) were conducted, representing different hours from several days in a week, and the corresponding NOX and PM10 emissions were estimated. The results show a general reduction in average speeds for higher intensities due to braking-acceleration patterns, which contributes to increasing the average emission factor and, therefore, the total emissions in the domain, especially on weekdays. The emissions are clearly related to traffic volume, although the maximum-emission scenario does not correspond to the highest traffic intensity, due to congestion and variations in fleet composition throughout the day. These results evidence the potential that local measures aimed at alleviating congestion may have in urban areas to reduce emissions. In general, scenario-averaged emission factors estimated with the VISSIM-VERSIT+micro modelling system fitted well with those from the average-speed model COPERT, used as a preliminary validation of the results. The largest deviations between these two models occur in those scenarios with more congestion. The design and resolution of the microscale modelling system allow the impact of actual traffic conditions on driving patterns and related emissions to be reflected, making it useful for the design of mitigation measures for specific traffic hot-spots. Copyright © 2016 Elsevier B.V. All rights reserved.
Integrated corridor management and advanced technologies for Florida : [summary].
DOT National Transportation Integrated Search
2012-11-01
The U.S. Department of Transportation (USDOT) has estimated the costs of congestion at $200 billion a year in delayed shipments and wasted fuel, and 4 billion hours lost by drivers in traffic. New roads alone cannot solve the problem because travel de...
DOT National Transportation Integrated Search
2012-02-01
A wide variety of advanced technological tools have been implemented throughout Georgia's transportation network to increase its efficiency. These systems are credited with reducing or maintaining freeway congestion levels in light of increasin...
Operational Evaluation of Dynamic Weather Routes at American Airlines
NASA Technical Reports Server (NTRS)
McNally, David; Sheth, Kapil; Gong, Chester; Borchers, Paul; Osborne, Jeff; Keany, Desmond; Scott, Brennan; Smith, Steve; Sahlman, Scott; Lee, Chuhan;
2013-01-01
Dynamic Weather Routes (DWR) is a search engine that continuously and automatically analyzes inflight aircraft in en route airspace and proposes simple route amendments for more efficient routes around convective weather while considering sector congestion, traffic conflicts, and active Special Use Airspace. NASA and American Airlines (AA) are conducting an operational trial of DWR at the AA System Operations Center in Fort Worth, TX. The trial includes only AA flights in Fort Worth Center airspace. Over the period from July 31, 2012 through August 31, 2012, 45% of routes proposed by DWR and evaluated by AA users - air traffic control coordinators and flight dispatchers - were rated as acceptable as proposed or with some modifications. The wind-corrected potential flying time savings for these acceptable routes totals 470 flying min, and results suggest another 1,500 min of potential savings for flights not evaluated due to staffing limitations. A sector congestion analysis shows that in only two out of 83 DWR routes rated acceptable by AA staff were the flights predicted to fly through a congested sector inside of 30 min downstream of present position. This shows that users considered sector congestion data provided by DWR automation and in nearly all cases did not accept routes through over-capacity sectors. It is estimated that 12 AA flights were given reroute clearances as a direct result of DWR for a total savings of 67 flying min.
Relationship between xerostomia and gingival condition in young adults.
Mizutani, S; Ekuni, D; Tomofuji, T; Azuma, T; Kataoka, K; Yamane, M; Iwasaki, Y; Morita, M
2015-02-01
Xerostomia is a subjective symptom of dryness in the mouth. Although a correlation between xerostomia and oral conditions in the elderly has been reported, there are few such studies in young adults. The aim of this study was to examine the relationship of xerostomia with the gingival condition in university students. A total of 2077 students (1202 male subjects and 875 female subjects), 18-24 years of age, were examined. The disease activity and severity of the gingival condition were assessed as the percentage of teeth with bleeding on probing (%BOP) and the presence of teeth with probing pocket depth of ≥ 4 mm, respectively. Additional information on xerostomia, oral health behaviors, coffee/tea intake and nasal congestion was collected via a questionnaire. Path analysis was used to test pathways from xerostomia to the gingival condition. One hundred eighty-three (8.8%) students responded that their mouths frequently or always felt dry. Xerostomia was related to %BOP and dental plaque formation, but was not related to the presence of probing pocket depth ≥ 4 mm. In the structural model, xerostomia was related to dental plaque formation (p < 0.01), and a lower level of dental plaque formation was associated with a lower %BOP. Xerostomia was associated with coffee/tea intake (p < 0.01) and nasal congestion (p < 0.001). Xerostomia was indirectly related to gingival disease activity through the accumulation of dental plaque. Nasal congestion and coffee/tea intake also affected xerostomia. These findings suggest that xerostomia should be considered in screening for gingivitis risk in young adults. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Analysis of Random Drop for Gateway Congestion Control
1989-11-01
...prompted the need for more effective congestion control policies. Currently No Gateway Policy is used to relieve and signal congestion, which leads to unfair service to the... Early application of the policy removes the pressure of congestion relief and allows more accurate signaling of congestion. To be used effectively...
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Kareem Jebur, Tuka
2018-05-01
Mobile ad hoc networks (MANETs) play a critical role in today's wireless ad hoc network research and consist of active nodes that can move freely. Because congestion is a very important problem in such networks, we propose a method based on a modified radial basis function network (RBFN) and a Self-Organizing Map (SOM). These networks can be improved by the use of clusters because of the heavy congestion in the whole network. In such a system, the performance of the MANET is improved by splitting the whole network into various clusters using the SOM. The performance of the clustering is improved by cluster head selection and the number of clusters. The modified radial basis neural network is a very simple, adaptable and efficient method to increase the lifetime of nodes, the packet delivery ratio, and the throughput of the network; connections become more useful because the optimal path has the best parameters among the candidate paths, including the best bitrate and the best link lifetime with minimum delay. The proposed routing algorithm depends on a group of factors and parameters to select the path between two points in the wireless network. The SOM clustering time averaged 1-10 msec for static nodes and 8-75 msec for mobile nodes, while routing times ranged from 92 to 510 msec. The proposed system is faster than Dijkstra's algorithm by 150-300% and faster than the unmodified RBFNN by 145-180%.
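As a rough illustration of the SOM clustering step, the sketch below (ours, with invented node positions and parameters, not the authors' implementation) trains a tiny one-dimensional SOM whose units act as cluster centers for node positions; a cluster head would then be chosen within each cluster.

    import numpy as np

    rng = np.random.default_rng(0)
    nodes = rng.uniform(0, 100, size=(60, 2))       # node (x, y) positions
    units = rng.uniform(0, 100, size=(4, 2))        # 4 units on a 1-D SOM grid

    for epoch in range(200):
        lr = 0.5 * (1.0 - epoch / 200)              # decaying learning rate
        sigma = max(1.0 * (1.0 - epoch / 200), 0.3) # shrinking neighbourhood
        for p in nodes:
            bmu = int(np.argmin(np.linalg.norm(units - p, axis=1)))
            for j in range(len(units)):             # neighbourhood update
                h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                units[j] += lr * h * (p - units[j])

    labels = np.argmin(np.linalg.norm(nodes[:, None] - units[None], axis=2), axis=1)
    print("cluster sizes:", np.bincount(labels, minlength=4))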
Open-path FTIR ozone measurements in Korea
NASA Astrophysics Data System (ADS)
Walter, William T.; Perry, Stephen H.; Han, Jin-Seok; Park, Chul-Jin
1999-02-01
In July 1997 the Republic of Korea became the 15th country to exceed 10 million registered motor vehicles. The number of cars has been increasing exponentially in Korea for the past 12 years, opening an era of one car per household in this nation with a population of 44 million. The air quality effects of the growth of increasingly congested motor vehicle traffic in Seoul, home to more than one-fourth of the entire population, are of great concern to Korea's National Institute of Environmental Research (NIER). AIL's Open-Path FTIR air quality monitor, RAM 2000TM, has been used to quantify the ozone increase over the course of a warm summer day. The RAM 2000 instrument was set up on the roof of the 6-story NIER headquarters. The retroreflector was sited 180 m away across a major highway, where it was tripod-mounted on top of the 6-story Korean National Institute of Health facility. During the Open-Path FTIR data taking, an NIER Air Physics Division research team periodically tethered an airborne balloon containing a pump and a potassium iodide solution to obtain absolute ozone concentration results, which indicated that the ambient ozone level was 50 ppb when the Open-Path FTIR measurements began. Total ozone concentrations exceeded 120 ppb for five hours between 11:30 AM and 4:30 PM. The peak ozone concentration measured was 199 ppb at 12:56 PM. The averaged concentration for five and a half hours of data collection was 145 ppb. Ammonia concentrations were also measured.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-21
... light of the DOT's strategic goals (safety, reduced congestion, global connectivity, environmental... issues, and Provide national estimates of transportation mode usage. Each version of the OHS will focus... owned vehicle; Taxi; Rail transit (subway, streetcar, or light rail); Commuter rail; Water...
DOT National Transportation Integrated Search
2017-12-01
In designing an effective traffic management plan for non-recurrent congestion, it is critical for responsible highway agencies to have some vital information, such as estimated incident duration, resulting traffic queues, and the expected delays. Ov...
DOT National Transportation Integrated Search
2000-05-01
It has been estimated that 57 percent of the nation's traffic congestion is due to crashes and other incidents. Organized traffic incident management is the primary tool in mitigating the impact. Traffic incident management involves multi-agency, mul...
Tornado Intensity Estimated from Damage Path Dimensions
Elsner, James B.; Jagger, Thomas H.; Elsner, Ian J.
2014-01-01
The Newcastle/Moore and El Reno tornadoes of May 2013 are recent reminders of the destructive power of tornadoes. A direct estimate of a tornado's power is difficult and dangerous to get. An indirect estimate on a categorical scale is available from a post-storm survey of the damage. Wind speed bounds are attached to the scale, but the scale is not adequate for analyzing trends in tornado intensity separate from trends in tornado frequency. Here tornado intensity on a continuum is estimated from damage path length and width, which are measured on continuous scales and correlated to the EF rating. The wind speeds on the EF scale are treated as interval censored data and regressed onto the path dimensions and fatalities. The regression model indicates a 25% increase in expected intensity over a threshold intensity of 29 m s⁻¹ for a 100 km increase in path length and a 17% increase in expected intensity for a one km increase in path width. The model shows a 43% increase in the expected intensity when fatalities are observed controlling for path dimensions. The estimated wind speeds correlate at a level of .77 (.34, .93) [95% confidence interval] with a small sample of wind speeds estimated independently from a Doppler radar calibration. The estimated wind speeds allow analyses to be done on the tornado database that are not possible with the categorical scale. The modeled intensities can be used in climatology and in environmental and engineering applications. Research is needed to understand the upward trends in path length and width. PMID:25229242
Tornado intensity estimated from damage path dimensions.
Elsner, James B; Jagger, Thomas H; Elsner, Ian J
2014-01-01
The Newcastle/Moore and El Reno tornadoes of May 2013 are recent reminders of the destructive power of tornadoes. A direct estimate of a tornado's power is difficult and dangerous to get. An indirect estimate on a categorical scale is available from a post-storm survey of the damage. Wind speed bounds are attached to the scale, but the scale is not adequate for analyzing trends in tornado intensity separate from trends in tornado frequency. Here tornado intensity on a continuum is estimated from damage path length and width, which are measured on continuous scales and correlated to the EF rating. The wind speeds on the EF scale are treated as interval censored data and regressed onto the path dimensions and fatalities. The regression model indicates a 25% increase in expected intensity over a threshold intensity of 29 m s⁻¹ for a 100 km increase in path length and a 17% increase in expected intensity for a one km increase in path width. The model shows a 43% increase in the expected intensity when fatalities are observed controlling for path dimensions. The estimated wind speeds correlate at a level of .77 (.34, .93) [95% confidence interval] with a small sample of wind speeds estimated independently from a Doppler radar calibration. The estimated wind speeds allow analyses to be done on the tornado database that are not possible with the categorical scale. The modeled intensities can be used in climatology and in environmental and engineering applications. Research is needed to understand the upward trends in path length and width.
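The reported percentage effects can be read as coefficients of a multiplicative (log-linear) model. The sketch below is our back-of-envelope reconstruction of that reading, not the authors' fitted interval-censored regression; the baseline excess intensity is an assumed placeholder.

    import math

    THRESH = 29.0                         # m/s threshold from the abstract
    b_len = math.log(1.25) / 100.0        # +25% per 100 km of path length
    b_wid = math.log(1.17)                # +17% per km of path width
    b_fat = math.log(1.43)                # +43% when fatalities are observed

    def expected_excess(length_km, width_km, fatalities, base_excess=10.0):
        """Expected intensity above threshold; base_excess (m/s) is an
        assumed baseline for a short, narrow, non-fatal tornado."""
        mult = math.exp(b_len * length_km + b_wid * width_km + b_fat * fatalities)
        return base_excess * mult

    # e.g. a 50 km long, 1.5 km wide tornado with fatalities observed:
    print(f"{THRESH + expected_excess(50, 1.5, 1):.1f} m/s expected intensity")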
Comparing Perceptions and Measures of Congestion
DOT National Transportation Integrated Search
2012-10-01
People's perception of congestion and the actual measured congestion do not always agree. Measured congestion relates to the delay resulting from field measurements of traffic volume, speed, and travel time. People's perception of congestion ...
A computer simulation of aircraft evacuation with fire
NASA Technical Reports Server (NTRS)
Middleton, V. E.
1983-01-01
A computer simulation was developed to assess passenger survival during the post-crash evacuation of a transport category aircraft when fire is a major threat. The computer code, FIREVAC, computes individual passenger exit paths and times to exit, taking into account delays and congestion caused by the interaction among the passengers and changing cabin conditions. Simple models for the physiological effects of the toxic cabin atmosphere are included with provision for including more sophisticated models as they become available. Both wide-body and standard-body aircraft may be simulated. Passenger characteristics are assigned stochastically from experimentally derived distributions. Results of simulations of evacuation trials and hypothetical evacuations under fire conditions are presented.
Quiet Short-Haul Research Airplane (QSRA) model select panel functional description
NASA Technical Reports Server (NTRS)
Watson, D. M.
1982-01-01
The QSRA, when equipped with programmable color cathode ray tube displays, a head up display, a general purpose digital computer and a microwave landing system receiver, will provide a capability to do handling qualities studies and terminal area operating systems experiments as well as to enhance an experimenter's ability to obtain repeatable aircraft performance data. The operating systems experiments include the capability to generate minimum fuel approach and departure paths and to conduct precision approaches to a STOLport runway. The mode select panel is designed to provide both the flexibility needed for a variety of flight test experiments and the minimum workload operation required by pilots flying into congested terminal traffic areas.
Delivering Faster Congestion Feedback with the Mark-Front Strategy
NASA Technical Reports Server (NTRS)
Liu, Chunlei; Jain, Raj
2001-01-01
Computer networks use congestion feedback from the routers and destinations to control the transmission load. Delivering timely congestion feedback is essential to the performance of networks. Reaction to congestion can be more effective if faster feedback is provided. Current TCP/IP networks use timeout, duplicate acknowledgement packets (ACKs) and explicit congestion notification (ECN) to deliver congestion feedback, each providing faster feedback than the previous method. In this paper, we propose a mark-front strategy that delivers even faster congestion feedback. With analytical and simulation results, we show that the mark-front strategy reduces the buffer size requirement, improves link efficiency and provides better fairness among users. Keywords: Explicit Congestion Notification, mark-front, congestion control, buffer size requirement, fairness.
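The intuition is easy to demonstrate: a mark placed on the packet at the head of the queue leaves the router after one service time, while a mark on the newly arrived tail packet waits behind the whole backlog. A toy calculation, with assumed service time and threshold:

    SERVICE_TIME = 1.0   # time units to transmit one packet (assumed)
    THRESHOLD = 5        # queue length that signals congestion (assumed)

    def feedback_delay(queue_len, mark_front):
        """Router-side delay before the congestion mark leaves on the wire."""
        position = 0 if mark_front else queue_len - 1   # front packet vs new tail
        return (position + 1) * SERVICE_TIME

    queue_len = 8                                       # congested: 8 > THRESHOLD
    if queue_len > THRESHOLD:
        print("tail-mark feedback delay :", feedback_delay(queue_len, False))
        print("front-mark feedback delay:", feedback_delay(queue_len, True))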
A hierarchical framework for air traffic control
NASA Astrophysics Data System (ADS)
Roy, Kaushik
Air travel in recent years has been plagued by record delays, with over $8 billion in direct operating costs being attributed to 100 million flight delay minutes in 2007. Major contributing factors to delay include weather, congestion, and aging infrastructure; the Next Generation Air Transportation System (NextGen) aims to alleviate these delays through an upgrade of the air traffic control system. Changes to large-scale networked systems such as air traffic control are complicated by the need for coordinated solutions over disparate temporal and spatial scales. Individual air traffic controllers must ensure aircraft maintain safe separation locally with a time horizon of seconds to minutes, whereas regional plans are formulated to efficiently route flows of aircraft around weather and congestion on the order of every hour. More efficient control algorithms that provide a coordinated solution are required to safely handle a larger number of aircraft in a fixed amount of airspace. Improved estimation algorithms are also needed to provide accurate aircraft state information and situational awareness for human controllers. A hierarchical framework is developed to simultaneously solve the sometimes conflicting goals of regional efficiency and local safety. Careful attention is given in defining the interactions between the layers of this hierarchy. In this way, solutions to individual air traffic problems can be targeted and implemented as needed. First, the regional traffic flow management problem is posed as an optimization problem and shown to be NP-Hard. Approximation methods based on aggregate flow models are developed to enable real-time implementation of algorithms that reduce the impact of congestion and adverse weather. Second, the local trajectory design problem is solved using a novel slot-based sector model. This model is used to analyze sector capacity under varying traffic patterns, providing a more comprehensive understanding of how increased automation in NextGen will affect the overall performance of air traffic control. The dissertation also provides solutions to several key estimation problems that support corresponding control tasks. Throughout the development of these estimation algorithms, aircraft motion is modeled using hybrid systems, which encapsulate both the discrete flight mode of an aircraft and the evolution of continuous states such as position and velocity. The target-tracking problem is posed as one of hybrid state estimation, and two new algorithms are developed to exploit structure specific to aircraft motion, especially near airports. First, discrete mode evolution is modeled using state-dependent transitions, in which the likelihood of changing flight modes is dependent on aircraft state. Second, an estimator is designed for systems with limited mode changes, including arrival aircraft. Improved target tracking facilitates increased safety in collision avoidance and trajectory design problems. A multiple-target tracking and identity management algorithm is developed to improve situational awareness for controllers about multiple maneuvering targets in a congested region. Finally, tracking algorithms are extended to predict aircraft landing times; estimated time of arrival prediction is one example of important decision support information for air traffic control.
Investigating the effect of freeway congestion thresholds on decision-making inputs.
DOT National Transportation Integrated Search
2010-05-01
Congestion threshold is embedded in the congestion definition. Two basic approaches exist in current practice for setting the congestion threshold. One common approach uses the free-flow or unimpeded conditions as the congestion threshold. ...
DOT National Transportation Integrated Search
2012-07-01
Freight delay is detrimental to the national economy. In an effort to gauge the economic impact of freight delay due to highway congestion, this project focuses on estimating shippers' value of delay (VOD). We have accomplished this through thr...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-02
... transportation system in light of the DOT's strategic goals (safety, reduced congestion, global connectivity... transportation issues, and provide national estimates of transportation mode usage. Each version of the OHS will... transit (subway, streetcar, or light rail) Commuter rail Water transportation (taxis, ferries, ships...
DOT National Transportation Integrated Search
2003-05-06
Congestion pricing can potentially reduce congestion by providing incentives for drivers to shift trips to off-peak periods, use less congested routes, or use alternative modes, thereby spreading out demand for available transportation infrastructure...
Understanding the topological characteristics and flow complexity of urban traffic congestion
NASA Astrophysics Data System (ADS)
Wen, Tzai-Hung; Chin, Wei-Chien-Benny; Lai, Pei-Chun
2017-05-01
For a growing number of developing cities, the capacities of streets cannot meet the rapidly growing demand of cars, causing traffic congestion. Understanding the spatial-temporal process of traffic flow and detecting traffic congestion are important issues associated with developing sustainable urban policies to resolve congestion. Therefore, the objective of this study is to propose a flow-based ranking algorithm for investigating traffic demands in terms of the attractiveness of street segments and flow complexity of the street network based on turning probability. Our results show that, by analyzing the topological characteristics of streets and volume data for a small fraction of street segments in Taipei City, the most congested segments of the city were identified successfully. The identified congested segments are significantly close to the potential congestion zones, including the officially announced most congested streets, the segments with slow moving speeds at rush hours, and the areas near significant landmarks. The identified congested segments also captured congestion-prone areas concentrated in the business districts and industrial areas of the city. Identifying the topological characteristics and flow complexity of traffic congestion provides network topological insights for sustainable urban planning, and these characteristics can be used to further understand congestion propagation.
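In the spirit of the flow-based ranking the abstract describes (our toy reconstruction, not the authors' exact algorithm), street segments can be treated as states of a Markov chain whose transitions are turning probabilities at junctions; the chain's stationary distribution then scores how much through-flow each segment attracts. The four-segment network below is invented for illustration.

    import numpy as np

    # T[i, j] = probability that traffic on segment i turns onto segment j
    T = np.array([[0.0, 0.6, 0.4, 0.0],
                  [0.1, 0.0, 0.5, 0.4],
                  [0.3, 0.3, 0.0, 0.4],
                  [0.2, 0.5, 0.3, 0.0]])

    pi = np.full(4, 0.25)                    # start from a uniform flow
    for _ in range(200):                     # power iteration to the fixed point
        pi = pi @ T
        pi /= pi.sum()                       # renormalize for numerical safety

    ranking = np.argsort(-pi)
    print("segment attractiveness:", np.round(pi, 3), "ranking:", ranking)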
Network congestion control algorithm based on Actor-Critic reinforcement learning model
NASA Astrophysics Data System (ADS)
Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen
2018-04-01
Aiming at the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion problems can be detected and prevented more effectively. Following the Actor-Critic reinforcement learning approach, a simulation experiment of the network congestion control algorithm is designed. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the dropping probability of packets is adaptively adjusted so as to improve network performance and avoid congestion. Based on the above findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.
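A crude single-state sketch of the idea (ours; the paper's controller and dynamics are not specified here, so the queue model, learning rates, and reward are all assumptions): the actor maintains a dropping probability, the critic estimates value with a linear TD(0) approximation, and the TD error nudges the dropping probability so the queue tracks a target length.

    import random

    random.seed(1)
    TARGET, GAMMA = 50.0, 0.9
    ALPHA, BETA = 0.01, 0.002            # critic / actor learning rates
    w0, w1 = 0.0, 0.0                    # linear critic: V(q) = w0 + w1*(q/100)
    p = 0.10                             # actor's mean dropping probability

    def value(q):
        return w0 + w1 * (q / 100.0)

    def step(q, drop):
        """Assumed toy dynamics: 10-14 arrivals thinned by drops, 10 served."""
        return max(q + random.uniform(10.0, 14.0) * (1.0 - drop) - 10.0, 0.0)

    queue = 80.0
    for t in range(20000):
        a = min(max(p + random.gauss(0.0, 0.05), 0.0), 1.0)  # explore around p
        nq = step(queue, a)
        reward = -abs(nq - TARGET) / 100.0                   # track target queue
        td = reward + GAMMA * value(nq) - value(queue)       # TD(0) error
        w0 += ALPHA * td                                     # critic update
        w1 += ALPHA * td * (queue / 100.0)
        p = min(max(p + BETA * td * (a - p), 0.0), 1.0)      # actor update
        queue = nq

    print(f"learned dropping probability ~ {p:.3f}, final queue ~ {queue:.1f}")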
Forearm vasodilatation following release of venous congestion
Caro, C. G.; Foley, T. H.; Sudlow, M. F.
1970-01-01
1. The volume rate of forearm blood flow was measured with a mercury-in-rubber strain gauge, or with a water-filled plethysmograph, from 1 sec after termination of a 2-3 min period of venous congestion. 2. When congesting pressure had been less than 18 mm Hg, average post-congestion flow (five subjects) was constant during approx. 10 sec and not significantly different from resting flow. 3. When congesting pressure had been 30 mm Hg, average post-congestion flow (eight subjects) was 26% higher than resting, during 3-4 sec after release of congestion, but rose to 273% of resting during 4-6 sec after release of congestion. 4. In other studies forearm vascular resistance had been found normal or increased during such venous congestion, and theoretical studies here indicated that passive mechanical factors could not account for the delayed occurrence of high post-congestion flow. 5. It appears, therefore, that the forearm vascular bed dilates actively shortly after release of substantial venous congestion. It would seem more likely that a myogenic mechanism, rather than a metabolic one, is responsible. PMID:5532541
Multiplex networks in metropolitan areas: generic features and local effects.
Strano, Emanuele; Shai, Saray; Dobson, Simon; Barthelemy, Marc
2015-10-06
Most large cities are spanned by more than one transportation system. These different modes of transport have usually been studied separately: it is however important to understand the impact on urban systems of coupling different modes and we report in this paper an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York. We observe a similar behaviour for network quantities related to quickest paths suggesting the existence of generic mechanisms operating beyond the local peculiarities of the specific cities studied. An analysis of the betweenness centrality distribution shows that the introduction of underground networks operates as a decentralizing force, creating congestion in places located at the end of underground lines. Also, we find that increasing the speed of subways is not always beneficial and may lead to unwanted uneven spatial distributions of accessibility. In fact, for London—but not for New York—there is an optimal subway speed in terms of global congestion. These results show that it is crucial to consider the full, multimodal, multilayer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions. © 2015 The Author(s).
Using temporal detrending to observe the spatial correlation of traffic.
Ermagun, Alireza; Chatterjee, Snigdhansu; Levinson, David
2017-01-01
This empirical study sheds light on the spatial correlation of traffic links under different traffic regimes. We mimic the behavior of real traffic by pinpointing the spatial correlation between 140 freeway traffic links in a major sub-network of the Minneapolis-St. Paul freeway system with a grid-like network topology. This topology enables us to juxtapose the positive and negative correlation between links, which has been overlooked in short-term traffic forecasting models. To accurately and reliably measure the correlation between traffic links, we develop an algorithm that eliminates temporal trends in three dimensions: (1) hourly dimension, (2) weekly dimension, and (3) system dimension for each link. The spatial correlation of traffic links exhibits a stronger negative correlation in rush hours, when congestion affects route choice. Although this correlation occurs mostly in parallel links, it is also observed upstream, where travelers receive information and are able to switch to substitute paths. Irrespective of the time-of-day and day-of-week, a strong positive correlation is witnessed between upstream and downstream links. This correlation is stronger in uncongested regimes, as traffic flow passes through consecutive links more quickly and there is no congestion effect to shift or stall traffic. The extracted spatial correlation structure can augment the accuracy of short-term traffic forecasting models.
Using temporal detrending to observe the spatial correlation of traffic
2017-01-01
This empirical study sheds light on the spatial correlation of traffic links under different traffic regimes. We mimic the behavior of real traffic by pinpointing the spatial correlation between 140 freeway traffic links in a major sub-network of the Minneapolis—St. Paul freeway system with a grid-like network topology. This topology enables us to juxtapose the positive and negative correlation between links, which has been overlooked in short-term traffic forecasting models. To accurately and reliably measure the correlation between traffic links, we develop an algorithm that eliminates temporal trends in three dimensions: (1) hourly dimension, (2) weekly dimension, and (3) system dimension for each link. The spatial correlation of traffic links exhibits a stronger negative correlation in rush hours, when congestion affects route choice. Although this correlation occurs mostly in parallel links, it is also observed upstream, where travelers receive information and are able to switch to substitute paths. Irrespective of the time-of-day and day-of-week, a strong positive correlation is witnessed between upstream and downstream links. This correlation is stronger in uncongested regimes, as traffic flow passes through consecutive links more quickly and there is no congestion effect to shift or stall traffic. The extracted spatial correlation structure can augment the accuracy of short-term traffic forecasting models. PMID:28472093
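The three detrending dimensions can be illustrated with synthetic data: two links that share only a daily cycle look strongly correlated until the hour-of-day, day-of-week, and overall means are removed. The sketch below is our reading of the procedure, not the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 28)                    # 4 weeks of hourly samples
    hour_of_day = hours % 24
    day_of_week = (hours // 24) % 7
    base = 50 + 20 * np.sin(2 * np.pi * hour_of_day / 24)  # shared daily cycle
    link_a = base + rng.normal(0, 3, hours.size)
    link_b = base + rng.normal(0, 3, hours.size)  # correlated only via the trend

    def detrend(x):
        r = x.astype(float).copy()
        for h in range(24):                       # (1) hourly dimension
            r[hour_of_day == h] -= r[hour_of_day == h].mean()
        for d in range(7):                        # (2) weekly dimension
            r[day_of_week == d] -= r[day_of_week == d].mean()
        return r - r.mean()                       # (3) system dimension

    raw = np.corrcoef(link_a, link_b)[0, 1]
    det = np.corrcoef(detrend(link_a), detrend(link_b))[0, 1]
    print(f"raw correlation {raw:.2f} vs detrended {det:.2f}")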
Multiplex networks in metropolitan areas: generic features and local effects
Strano, Emanuele; Shai, Saray; Dobson, Simon; Barthelemy, Marc
2015-01-01
Most large cities are spanned by more than one transportation system. These different modes of transport have usually been studied separately: it is however important to understand the impact on urban systems of coupling different modes and we report in this paper an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York. We observe a similar behaviour for network quantities related to quickest paths suggesting the existence of generic mechanisms operating beyond the local peculiarities of the specific cities studied. An analysis of the betweenness centrality distribution shows that the introduction of underground networks operates as a decentralizing force, creating congestion in places located at the end of underground lines. Also, we find that increasing the speed of subways is not always beneficial and may lead to unwanted uneven spatial distributions of accessibility. In fact, for London—but not for New York—there is an optimal subway speed in terms of global congestion. These results show that it is crucial to consider the full, multimodal, multilayer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions. PMID:26400198
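The end-of-line congestion effect is easy to reproduce on a toy multiplex network (our invented geometry, using networkx): adding a faster "subway" line across a street grid concentrates betweenness centrality at the line's terminal stations.

    import networkx as nx

    G = nx.grid_2d_graph(7, 7)                       # street layer
    nx.set_edge_attributes(G, 1.0, "time")           # unit travel time per block
    subway = [(0, 3), (2, 3), (4, 3), (6, 3)]        # stations on the middle row
    for u, v in zip(subway, subway[1:]):             # subway layer: much faster
        G.add_edge(u, v, time=0.5)

    bc = nx.betweenness_centrality(G, weight="time")
    top = sorted(bc, key=bc.get, reverse=True)[:4]
    print("highest-betweenness nodes:", top)         # end stations rank high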
Research on Urban Road Traffic Congestion Charging Based on Sustainable Development
NASA Astrophysics Data System (ADS)
Ye, Sun
Traffic congestion is a major problem that hinders sustainable urban traffic development at present. Congestion charging is an effective measure to alleviate urban traffic congestion. The paper first probes into several key issues, such as the goal, pricing, scope, method and revenue redistribution of congestion charging, from a theoretical angle. It then introduces congestion charging practice in Singapore and London and concludes with the suggestion that congestion charging should be premised on scientific planning, public support, and the development of public transportation.
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.
Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
Wattad, Malak; Darawsha, Wisam; Solomonica, Amir; Hijazi, Maher; Kaplan, Marielle; Makhoul, Badira F; Abassi, Zaid A; Azzam, Zaher S; Aronson, Doron
2015-04-01
Worsening renal function (WRF) and congestion are inextricably related pathophysiologically, suggesting that WRF occurring in conjunction with persistent congestion would be associated with worse clinical outcome. We studied the interdependence between WRF and persistent congestion in 762 patients with acute decompensated heart failure (HF). WRF was defined as ≥0.3 mg/dl increase in serum creatinine above baseline at any time during hospitalization and persistent congestion as ≥1 sign of congestion at discharge. The primary end point was all-cause mortality with mean follow-up of 15 ± 9 months. Readmission for HF was a secondary end point. Persistent congestion was more common in patients with WRF than in patients with stable renal function (51.0% vs 26.6%, p <0.0001). Both persistent congestion and persistent WRF were significantly associated with mortality (both p <0.0001). There was a strong interaction (p = 0.003) between persistent WRF and congestion, such that the increased risk for mortality occurred predominantly with both WRF and persistent congestion. The adjusted hazard ratio for mortality in patients with persistent congestion as compared with those without was 4.16 (95% confidence interval [CI] 2.20 to 7.86) in patients with WRF and 1.50 (95% CI 1.16 to 1.93) in patients without WRF. In conclusion, persistent congestion is frequently associated with WRF. We have identified a substantial interaction between persistent congestion and WRF such that congestion portends increased mortality particularly when associated with WRF. Copyright © 2015 Elsevier Inc. All rights reserved.
Price of anarchy on heterogeneous traffic-flow networks
NASA Astrophysics Data System (ADS)
Rose, A.; O'Dea, R.; Hopcraft, K. I.
2016-09-01
The efficiency of routing traffic through a network, comprising nodes connected by links whose cost of traversal is either fixed or varies in proportion to volume of usage, can be measured by the "price of anarchy." This is the ratio of the cost incurred by agents who act to minimize their individual expenditure to the optimal cost borne by the entire system. As the total traffic load and the network variability—parameterized by the proportion of variable-cost links in the network—changes, the behaviors that the system presents can be understood with the introduction of a network of simpler structure. This is constructed from classes of nonoverlapping paths connecting source to destination nodes that are characterized by the number of variable-cost edges they contain. It is shown that localized peaks in the price of anarchy occur at critical traffic volumes at which it becomes beneficial to exploit ostensibly more expensive paths as the network becomes more congested. Simulation results verifying these findings are presented for the variation of the price of anarchy with the network's size, aspect ratio, variability, and traffic load.
Price of anarchy on heterogeneous traffic-flow networks.
Rose, A; O'Dea, R; Hopcraft, K I
2016-09-01
The efficiency of routing traffic through a network, comprising nodes connected by links whose cost of traversal is either fixed or varies in proportion to volume of usage, can be measured by the "price of anarchy." This is the ratio of the cost incurred by agents who act to minimize their individual expenditure to the optimal cost borne by the entire system. As the total traffic load and the network variability (parameterized by the proportion of variable-cost links in the network) changes, the behaviors that the system presents can be understood with the introduction of a network of simpler structure. This is constructed from classes of nonoverlapping paths connecting source to destination nodes that are characterized by the number of variable-cost edges they contain. It is shown that localized peaks in the price of anarchy occur at critical traffic volumes at which it becomes beneficial to exploit ostensibly more expensive paths as the network becomes more congested. Simulation results verifying these findings are presented for the variation of the price of anarchy with the network's size, aspect ratio, variability, and traffic load.
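The smallest fixed/variable network already shows the quantity the paper studies. In Pigou's classic two-link example (our illustration, not the paper's lattice model), one link has fixed cost 1 and the other costs its usage fraction x; selfish routing sends everyone down the variable link, and the price of anarchy works out to 4/3.

    def total_cost(x):                 # x = fraction of agents on the variable link
        return x * x + (1 - x) * 1.0   # variable link costs x each; fixed link costs 1

    selfish = total_cost(1.0)          # everyone picks the never-worse variable link
    optimal = min(total_cost(i / 1000) for i in range(1001))  # social optimum
    print(f"price of anarchy = {selfish / optimal:.3f}")      # -> 1.333 (4/3)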
Chargé, Pascal; Bazzi, Oussama; Ding, Yuehua
2018-01-01
A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods. PMID:29734797
Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua
2018-05-06
A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods.
The Airspace Concepts Evaluation System Architecture and System Plant
NASA Technical Reports Server (NTRS)
Windhorst, Robert; Meyn, Larry; Manikonda, Vikram; Carlos, Patrick; Capozzi, Brian
2006-01-01
The Airspace Concepts Evaluation System is a simulation of the National Airspace System. It includes models of flights, airports, airspaces, air traffic control, traffic flow management, and airline operation centers operating throughout the United States. It is used to predict system delays in response to future capacity and demand scenarios and to perform benefits assessments of current and future airspace technologies and operational concepts. Facilitation of these studies requires that the simulation architecture support plug and play of different air traffic control, traffic flow management, and airline operation center models, as well as multi-fidelity modeling of flights, airports, and airspaces. The simulation is divided into two parts that are named, borrowing from classical control theory terminology, control and plant. The control consists of air traffic control, traffic flow management, and airline operation center models, and the plant consists of flight, airport, and airspace models. The plant can run open loop, in the absence of the control. However, undesired effects, such as conflicts and overcongestion in the airspaces and airports, can occur. Different controls are applied, "plug and played", to the plant. A particular control is evaluated by analyzing how well it managed conflicts and congestion. Furthermore, the terminal area plants consist of models of airports and terminal airspaces. Each model consists of a set of nodes and links which are connected by the user to form a network. Nodes model runways, fixes, taxi intersections, gates, and/or other points of interest, and links model taxiways, departure paths, and arrival paths. Metering, flow distribution, and sequencing functions can be applied at nodes. Links can use models of different fidelity for how a flight transits them. The fidelity of the model can be adjusted by the user by changing either the complexity of the node/link network or the way that the links model how flights transit from one node to the other.
Reducing a congestion with introduce the greedy algorithm on traffic light control
NASA Astrophysics Data System (ADS)
Catur Siswipraptini, Puji; Hendro Martono, Wisnu; Hartanti, Dian
2018-03-01
The density of vehicles causes congestion at every junction in the city of Jakarta, because the traffic light timing system is static or manually set and the queue length at each junction is therefore uncertain. This research aims to design a sensor-based traffic system that detects vehicle queue lengths in order to optimize the duration of the green light. Infrared sensors placed along each intersection approach detect the queue length, and a greedy algorithm is then applied to extend the green light duration for the approaches that need it; the traffic light control program based on the greedy algorithm is stored on an Arduino Mega 2560 microcontroller. The developed system extends the green light duration for approaches with long vehicle queues and shortens it at intersections whose queues are not too dense. A physical scale model (a simple simulator, hereafter called the scale model) of the intersection was then built and tested. The infrared sensors, placed 10 cm apart along each approach of the scale model, serve as queue detectors. Tests on the scale model show that longer queues receive longer green light times, which mitigates the problem of long vehicle queues. Using the greedy algorithm adds 2 seconds of green light time for approaches whose queues reach at least the third sensor level and accelerates service at the other approaches whose queue levels are below level three.
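Our reading of the greedy rule can be sketched in a few lines (the base green time is an assumption; the 2 s extension and level-three threshold follow the abstract):

    BASE_GREEN, EXTENSION, EXTEND_LEVEL = 10, 2, 3   # seconds / sensor levels

    def next_phase(queue_levels):
        """Greedy choice: serve the approach with the longest detected queue."""
        approach = max(range(len(queue_levels)), key=lambda i: queue_levels[i])
        green = BASE_GREEN
        if queue_levels[approach] >= EXTEND_LEVEL:   # queue reaches sensor 3
            green += EXTENSION
        return approach, green

    # four approaches; queue level = number of triggered sensors (10 cm apart)
    levels = [1, 4, 0, 2]
    approach, green = next_phase(levels)
    print(f"green light to approach {approach} for {green} s")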
Delay-based virtual congestion control in multi-tenant datacenters
NASA Astrophysics Data System (ADS)
Liu, Yuxin; Zhu, Danhong; Zhang, Dong
2018-03-01
With the evolution of cloud computing and virtualization, congestion control for virtual datacenters has become a basic issue in multi-tenant datacenter transmission. To address the fairness conflicts among the heterogeneous congestion control algorithms of multiple tenants, this paper proposes delay-based virtual congestion control, which uniformly translates the tenants' heterogeneous congestion control into delay-based feedback by introducing a hypervisor translation layer, modifying the three-way handshake for explicit feedback and packet-loss feedback, and throttling the receive window. The simulation results show that delay-based virtual congestion control can effectively resolve the unfairness of heterogeneous feedback congestion control algorithms.
Channel Modeling of Miniaturized Battery-Powered Capacitive Human Body Communication Systems.
Park, Jiwoong; Garudadri, Harinath; Mercier, Patrick P
2017-02-01
The purpose of this contribution is to estimate the path loss of capacitive human body communication (HBC) systems under practical conditions. Most prior work utilizes large grounded instruments to perform path loss measurements, resulting in overly optimistic path loss estimates for wearable HBC devices. In this paper, small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions. A hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems. Measurements from form-factor-accurate prototypes reveal path loss results between 31.7 and 42.2 dB from 20 to 150 MHz. Simulation results matched measurements within 2.5 dB. Co-measurements using a large grounded benchtop vector network analyzer (VNA) and a large battery-powered spectrum analyzer (SA) underestimate path loss by up to 33.6 and 8.2 dB, respectively. Measurements utilizing a VNA with baluns, or large battery-powered SAs with baluns, still underestimate path loss by up to 24.3 and 6.7 dB, respectively. Measurements of path loss in capacitive HBC systems strongly depend on instrumentation configurations. It is thus imperative to simulate or measure path loss in capacitive HBC systems utilizing realistic geometries and grounding configurations. HBC has a great potential for many emerging wearable devices and applications; accurate path loss estimation will improve system-level design leading to viable products.
Bragg peak prediction from quantitative proton computed tomography using different path estimates
Wang, Dongxu; Mackie, T Rockwell
2015-01-01
This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
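For reference, a CSP of the kind compared above can be built as a cubic Hermite segment that matches a proton's measured entry and exit positions and directions. The sketch below is a generic illustration with invented coordinates, not the paper's reconstruction code.

    import numpy as np

    def cubic_spline_path(p_in, d_in, p_out, d_out, n=50):
        """Hermite cubic from p_in to p_out with end tangents d_in, d_out."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        h00 = 2 * t**3 - 3 * t**2 + 1        # Hermite basis functions
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return h00 * p_in + h10 * d_in + h01 * p_out + h11 * d_out

    p_in,  d_in  = np.array([0.0, 0.0]),  np.array([10.0, 0.5])   # entry (assumed)
    p_out, d_out = np.array([10.0, 0.3]), np.array([10.0, -0.4])  # exit (assumed)
    path = cubic_spline_path(p_in, d_in, p_out, d_out)
    print("path sampled at", len(path), "points; midpoint:", path[25].round(3))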
NASA Technical Reports Server (NTRS)
Cole, Robert; Wear, Mary; Young, Millennia; Cobel, Christopher; Mason, Sara
2017-01-01
Congestion is commonly reported during spaceflight, and most crewmembers have reported using medications for congestion during International Space Station (ISS) missions. Although congestion has been attributed to fluid shifts during spaceflight, fluid status reaches equilibrium during the first week after launch while congestion continues to be reported throughout long duration missions. Congestion complaints have anecdotally been reported in relation to ISS CO2 levels; this evaluation was undertaken to determine whether or not an association exists. METHODS: Reported headaches, congestion symptoms, and CO2 levels were obtained for ISS expeditions 2-31, and time-weighted means and single-point maxima were determined for 24-hour (24hr) and 7-day (7d) periods prior to each weekly private medical conference. Multiple imputation addressed missing data, and logistic regression modeled the relationship between probability of reported event of congestion or headache and CO2 levels, adjusted for possible confounding covariates. The first seven days of spaceflight were not included to control for fluid shifts. Data were evaluated to determine the concentration of CO2 required to maintain the risk of congestion below 1% to allow for direct comparison with a previously published evaluation of CO2 concentrations and headache. RESULTS: This study confirmed a previously identified significant association between CO2 and headache and also found a significant association between CO2 and congestion. For each 1 mm Hg increase in CO2, the odds of a crew member reporting congestion doubled. The average 7-day CO2 would need to be maintained below 1.5 mm Hg to keep the risk of congestion below 1%. The predicted probability curves of ISS headache and congestion appear parallel when plotted against ppCO2 levels, with congestion occurring at approximately 1 mm Hg lower than a headache would be reported. DISCUSSION: While the cause of congestion is multifactorial, this study showed congestion is associated with CO2 levels on ISS. Data from additional expeditions could be incorporated to further assess this finding. CO2 levels are also associated with reports of headaches on ISS. While it may be expected for astronauts with congestion to also complain of headaches, these two symptoms are commonly mutually exclusive. Furthermore, it is unknown if a temporal CO2 relationship exists between congestion and headache on ISS. CO2 levels were time-weighted for 24hr and 7d, and thus the time course of congestion leading to headache was not assessed; however, congestion could be an early CO2-related symptom when compared to headache. Future studies evaluating the association of CO2-related congestion leading to headache would be difficult due to the relatively stable daily CO2 levels on ISS currently, but a systematic study could be implemented on-orbit if desired.
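Two reported facts, odds of congestion doubling per 1 mm Hg and a risk below 1% when the 7-day mean CO2 stays under about 1.5 mm Hg, are enough to pin down the shape of a logistic dose-response curve. The sketch below is our hedged reconstruction of that shape, not the study's fitted model (which also adjusted for covariates).

    import math

    SLOPE = math.log(2)                        # odds ratio of 2 per mm Hg
    # anchor the curve so that P(congestion) = 1% at 1.5 mm Hg
    INTERCEPT = math.log(0.01 / 0.99) - SLOPE * 1.5

    def p_congestion(ppco2_mmhg):
        """Predicted probability of a congestion report at a given ppCO2."""
        z = INTERCEPT + SLOPE * ppco2_mmhg
        return 1.0 / (1.0 + math.exp(-z))

    for c in (1.0, 1.5, 2.0, 3.0, 4.0):
        print(f"{c:.1f} mm Hg -> {100 * p_congestion(c):.2f}% risk")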
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit
2010-01-01
The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and the expected cost of route deviation due to convective weather. They are considered alternatives to the set of coded departure routes predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula that calculates the predicted probability of deviation from a given flight path is also derived. The predicted probability of deviation is calculated for all path candidates, and the routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy a desired level of reliability.
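A minimal sketch of the stage-based dynamic program described above (the graph, costs, and deviation probabilities are invented, and the paper's actual cost structure and probability formula are not reproduced here). Each link is examined exactly once, which is why the running time is linear in the number of links between stages:

```python
# Hypothetical stage graph: stages[k] lists the (unique) node ids at stage k,
# links maps (u, v) -> (fuel_cost, p_deviate), and deviation_penalty prices a
# weather deviation. Link cost: fuel + p_deviate * deviation_penalty.
def best_route(stages, links, deviation_penalty):
    cost = {stages[0][0]: 0.0}           # single origin at stage 0
    pred = {}
    for k in range(len(stages) - 1):
        nxt = {}
        for u in stages[k]:
            for v in stages[k + 1]:
                if (u, v) not in links:
                    continue
                fuel, p_dev = links[(u, v)]
                c = cost[u] + fuel + p_dev * deviation_penalty
                if v not in nxt or c < nxt[v]:
                    nxt[v] = c
                    pred[v] = u
        cost = nxt                        # each link examined once: linear time
    dest = min(cost, key=cost.get)        # cheapest terminal node
    route = [dest]
    while route[-1] in pred:
        route.append(pred[route[-1]])
    return route[::-1], cost[dest]

stages = [["ORIG"], ["A", "B"], ["DEST"]]
links = {("ORIG", "A"): (10.0, 0.30), ("ORIG", "B"): (12.0, 0.05),
         ("A", "DEST"): (10.0, 0.10), ("B", "DEST"): (11.0, 0.05)}
print(best_route(stages, links, deviation_penalty=30.0))
# -> the slightly longer but weather-safer ORIG-B-DEST route wins.
```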
NASA Technical Reports Server (NTRS)
Moran, J. M.; Rosen, B. R.
1980-01-01
The uncertainty in propagation delay estimates is due primarily to tropospheric water, the total amount and vertical distribution of which is variable. Because water vapor both delays and attenuates microwave signals, the propagation delay, or wet path length, can be estimated from the microwave brightness temperature near the 22.235 GHz transition of water vapor. Data from a total of 240 radiosonde launches were analyzed. Estimates of brightness temperature at 19 and 22 GHz and wet path length were made from these data. The wet path length in the zenith direction could be estimated from the surface water vapor density to an accuracy of 5 cm for the summer data and 2 cm for the winter data. Using the brightness temperatures, the wet path could be estimated to an accuracy of 0.3 cm. Two dual-frequency radiometers were refurbished in order to test these techniques. These radiometers were capable of measuring the difference in brightness temperature at a 30 deg elevation angle and at the zenith to an accuracy of about 1 K. In August 1975, 45 radiosondes were launched over an 11-day period. Brightness temperature measurements were made simultaneously at 19 and 22 GHz with the radiometers. The rms error for the estimation of wet path length from surface meteorological parameters was 3.2 cm, and from the radiometer brightness temperatures, 1.5 cm.
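The retrieval amounts to a linear regression of wet path length on the two brightness temperatures. A hedged sketch with synthetic stand-in data (the coefficients and noise levels below are invented, not the report's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for radiosonde-derived training data: true zenith wet
# path length L (cm) and brightness temperatures near the 22.235 GHz line.
L = rng.uniform(2.0, 20.0, 240)                       # wet path length, cm
tb19 = 10.0 + 1.1 * L + rng.normal(0.0, 0.4, L.size)  # Tb at 19 GHz, K
tb22 = 15.0 + 2.3 * L + rng.normal(0.0, 0.4, L.size)  # Tb at 22 GHz, K

# Fit L ~ a0 + a1*Tb19 + a2*Tb22 by ordinary least squares.
A = np.column_stack([np.ones_like(L), tb19, tb22])
coef, *_ = np.linalg.lstsq(A, L, rcond=None)

resid = L - A @ coef
print("rms retrieval error: %.2f cm" % np.sqrt(np.mean(resid**2)))
```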
NASA Astrophysics Data System (ADS)
Tone, Tetsuya; Kohara, Kazuhiro
We have investigated ways to reduce congestion in a theme park with multi-agents. We constructed a theme park model called Digital Park 1.0 with twenty-three attractions similar in form to Tokyo Disney Sea. We consider not only congestion information (the number of visitors standing in line at each attraction) but also the advantage of a priority boarding pass, like the Fast Pass used at Tokyo Disney Sea. The congestion-information-usage ratio, which reflects the ratio of visitors who behave according to congestion information, was changed from 0% to 100% in both models, with and without the priority boarding pass. The "mean stay time of visitors" is a measure of satisfaction: the smaller the mean stay time, the greater the degree of satisfaction. Here, a short stay time means a short wait time. The results of each simulation are averaged over ten trials. The main results are as follows. (1) When the congestion-information-usage ratio increased, the mean stay time decreased. When 20% of visitors behaved according to congestion information, the mean stay time was reduced by 30%. (2) A priority boarding pass reduced congestion, and mean stay time was reduced by 15%. (3) When visitors used both congestion information and a priority boarding pass, mean stay time was further reduced. When the congestion-information-usage ratio was 20%, mean stay time was reduced by 35%. (4) When the congestion-information-usage ratio was over 50%, the congestion reduction effects reached saturation.
Mecklai, Alicia; Subačius, Haris; Konstam, Marvin A; Gheorghiade, Mihai; Butler, Javed; Ambrosy, Andrew P; Katz, Stuart D
2016-07-01
The aim of this study was to characterize the association between decongestion therapy and 30-day outcomes in patients hospitalized for heart failure (HF). Loop diuretic agents are commonly prescribed for the treatment of symptomatic congestion in patients hospitalized for HF, but the association between loop diuretic agent dose response and post-discharge outcomes has not been well characterized. Cox proportional hazards models were used to estimate the association among average loop diuretic agent dose, congestion status at discharge, and 30-day post-discharge all-cause mortality and HF rehospitalization in 3,037 subjects hospitalized with worsening HF enrolled in the EVEREST (Efficacy of Vasopressin Antagonism in Heart Failure: Outcome Study With Tolvaptan) study. In univariate analysis, subjects exposed to high-dose diuretic agents (≥160 mg/day) had greater risk for the combined outcome than subjects exposed to low-dose diuretic agents (18.9% vs. 10.0%; hazard ratio: 2.00; 95% confidence interval: 1.64 to 2.46; p < 0.0001). After adjustment for pre-specified covariates of disease severity, the association between diuretic agent dose and outcomes was not significant (hazard ratio: 1.11; 95% confidence interval: 0.89 to 1.38; p = 0.35). Of the 3,011 subjects with clinical assessments of volume status, 2,063 (69%) had little or no congestion at hospital discharge. Congestion status at hospital discharge did not modify the association between diuretic agent exposure and the combined endpoint (p for interaction = 0.84). Short-term diuretic agent exposure during hospital treatment for worsening HF was not an independent predictor of 30-day all-cause mortality and HF rehospitalization in multivariate analysis. Congestion status at discharge did not modify the association between diuretic agent dose and clinical outcomes. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263
Dynamic, stochastic models for congestion pricing and congestion securities.
DOT National Transportation Integrated Search
2010-12-01
This research considers congestion pricing under demand uncertainty. In particular, a robust optimization (RO) approach is applied to optimal congestion pricing problems under user equilibrium. A mathematical model is developed and an analysis perfor...
Assessment of the Performance of a Dual-Frequency Surface Reference Technique
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang; Tanelli, Simone; Durden, Stephen
2013-01-01
The high correlation of the rain-free surface cross sections at two frequencies implies that the estimate of differential path integrated attenuation (PIA) caused by precipitation along the radar beam can be obtained to a higher degree of accuracy than the path-attenuation at either frequency. We explore this finding first analytically and then by examining data from the JPL dual-frequency airborne radar using measurements from the TC4 experiment obtained during July-August 2007. Despite this improvement in the accuracy of the differential path attenuation, solving the constrained dual-wavelength radar equations for parameters of the particle size distribution requires not only this quantity but the single-wavelength path attenuation as well. We investigate a simple method of estimating the single-frequency path attenuation from the differential attenuation and compare this with the estimate derived directly from the surface return.
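In sketch form, the surface reference technique subtracts the measured surface cross section from a rain-free reference at each frequency, and the differential PIA is the difference of the two, so correlated reference errors largely cancel. A minimal illustration (all dB values invented):

```python
# Surface reference technique (SRT), minimal sketch. sigma0 values are
# normalized surface cross sections in dB; in practice the rain-free
# references come from nearby rain-free fields of view at the same incidence.
def pia(sigma0_rainfree_db, sigma0_rain_db):
    """Two-way path-integrated attenuation (dB) at one frequency."""
    return sigma0_rainfree_db - sigma0_rain_db

def differential_pia(ref_ku, meas_ku, ref_ka, meas_ka):
    """delta-PIA = PIA(Ka) - PIA(Ku); correlated reference errors cancel."""
    return pia(ref_ka, meas_ka) - pia(ref_ku, meas_ku)

# Example numbers (illustrative only).
print(pia(10.0, 7.5), pia(10.5, 4.0))           # 2.5 dB at Ku, 6.5 dB at Ka
print(differential_pia(10.0, 7.5, 10.5, 4.0))   # 4.0 dB differential PIA
```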
Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment
NASA Technical Reports Server (NTRS)
Dixon, T. H.; Wolf, S. Kornreich
1990-01-01
Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high-precision results with GPS in the CASA region.
Traffic congestion and reliability : linking solutions to problems.
DOT National Transportation Integrated Search
2004-07-19
The Traffic Congestion and Reliability: Linking Solutions to Problems Report provides : a snapshot of congestion in the United States by summarizing recent trends in : congestion, highlighting the role of unreliable travel times in the effects of con...
Use of (N-1)-D expansions for N-D phase unwrapping in MRI
NASA Astrophysics Data System (ADS)
Bones, Philip J.; King, Laura J.; Millane, Rick P.
2017-09-01
In MRI, the presence of metal implants causes severe artifacts in images and interferes with the usual techniques used to separate fat signals from other tissues. In the Dixon method, three images are acquired at different echo times to enable the variation in the magnetic field to be estimated. However, the estimate is represented as the phase of a complex quantity and therefore suffers from wrapping. High field gradients near the metal mean that the phase estimate is undersampled and therefore challenging to unwrap. We have developed POP, phase estimation by onion peeling, an algorithm which unwraps the phase along 1-D paths for a 2-D image obtained with the Dixon method. The unwrapping is initially performed along a closed path enclosing the implant and well separated from it. The recovered phase is expanded using a smooth periodic basis along the path. Then, path by path, the estimate is applied to the next path and the expansion coefficients are estimated to best fit the wrapped measurements. We have successfully tested POP on MRI images of specially constructed phantoms and on a group of patients with hip implants. In principle, POP can be extended to 3-D imaging. In that case, POP would entail representing phase with a suitably smooth basis over a series of surfaces enclosing the implant (the "onion skins"), again beginning the phase estimation well away from the implant. An approach for this is proposed. Results are presented for fat and water separation for 2-D images of phantoms and actual patients. The practicality of the method and its employment in clinical MRI are discussed.
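The POP algorithm itself is the authors'; the sketch below only illustrates its two named ingredients on synthetic data: 1-D unwrapping along a closed path, followed by a least-squares fit of a smooth periodic (truncated Fourier) basis:

```python
import numpy as np

def unwrap_closed_path(wrapped, n_harmonics=5):
    """Unwrap phase samples taken along a closed path, then fit a smooth
    periodic (truncated Fourier) expansion to them by least squares."""
    phi = np.unwrap(wrapped)                    # 1-D unwrap along the path
    t = np.linspace(0.0, 2*np.pi, phi.size, endpoint=False)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):         # periodic basis on the path
        cols += [np.cos(k*t), np.sin(k*t)]
    B = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(B, phi, rcond=None)
    return phi, B @ coef                        # raw samples and smooth fit

# Smooth synthetic field phase along a closed contour, rewrapped to (-pi, pi].
t = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
true_phase = 6.0*np.sin(t) + 2.0*np.cos(2*t)
wrapped = np.angle(np.exp(1j*true_phase))
phi, fit = unwrap_closed_path(wrapped)
print(np.max(np.abs(fit - true_phase)))         # small for this dense sampling
```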
Preventing Bandwidth Abuse at the Router through Sending Rate Estimate-based Active Queue Management
2007-06-01
...behavior is growing in the Internet. These non-responsive sources can monopolize network bandwidth and starve the “congestion friendly” flows. ... unnecessarily complex because most of the flows in the Internet are short flows, usually termed “web mice” [7]. Moreover, having a separate queue for each...
2012 urban congestion trends, operations : the key to reliable travel.
DOT National Transportation Integrated Search
2013-04-01
Congestion levels remained relatively unchanged : from 2011 to 2012 in the 19 urban areas in the United States monitored : in this report. : Congestion levels across all of the congestion measures are still generally : below the levels experienced in...
Anolik, Robert
2009-06-01
Allergic rhinitis (AR) is rapidly increasing in global prevalence. Symptoms of AR, particularly nasal congestion, can cause quality of life (QoL) impairment. Second-generation antihistamines are a recommended first-line therapy for AR but are not viewed as very effective for the treatment of congestion. Therefore, an antihistamine plus a decongestant, such as the combination of desloratadine and pseudoephedrine, is a convenient and efficacious treatment. To review the clinical evidence on the efficacy and safety of combination desloratadine/pseudoephedrine for the treatment of AR symptoms, particularly nasal congestion. Four large studies found that improvement in nasal congestion is enhanced when patients are treated with combination desloratadine/pseudoephedrine. The combination drug significantly improved mean reflective nasal congestion scores in these studies compared with either component as monotherapy (p
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation of empirical ground-motion prediction models obtained by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
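The arithmetic behind the reported reductions is simple variance bookkeeping: removing a repeatable component subtracts its variance from the total. A sketch with invented component values, chosen only so the output falls inside the reported ranges:

```python
import math

# Illustrative variance components (made up): the total aleatory sigma splits
# into site-specific, path-specific, and remaining record-to-record parts.
sigma = {"site": 0.35, "path": 0.50, "remaining": 0.45}
total = math.sqrt(sum(s**2 for s in sigma.values()))

# Single-site sigma drops the repeatable site term; single-path additionally
# drops the repeatable path term.
single_site = math.sqrt(total**2 - sigma["site"]**2)
single_path = math.sqrt(total**2 - sigma["site"]**2 - sigma["path"]**2)

print("reduction, single site: %.0f%%" % (100 * (1 - single_site / total)))
print("reduction, single path: %.0f%%" % (100 * (1 - single_path / total)))
```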
NASA Astrophysics Data System (ADS)
Gunawan, Fergyanto E.; Abbas, Bahtiar S.; Atmadja, Wiedjaja; Yoseph Chandra, Fajar; Agung, Alexander AS; Kusnandar, Erwin
2014-03-01
Traffic congestion in Asian megacities has become extremely severe, and any means to lessen the congestion level is urgently needed. Building an efficient mass transportation system is clearly necessary. However, implementing Intelligent Transportation Systems (ITS) has also been demonstrated to be effective in various advanced countries. Recently, the floating vehicle technique (FVT), an ITS implementation, has become cost effective for providing real-time traffic information, thanks to the proliferation of smartphones. Although many publications have discussed various issues related to the technique, none of them elaborates on the discrepancy between a single floating car data (FCD) trace and the associated fleet data. This work addresses the issue based on an analysis of Sugiyama et al.'s experimental data. The results indicate that there is an optimum averaging time interval such that the velocity estimated by the FVT reasonably represents the traffic velocity.
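The trade-off behind the optimum averaging interval can be reproduced on toy data: longer windows suppress single-vehicle noise but smear genuine speed dynamics. A hedged sketch (all speeds and noise levels invented, unrelated to Sugiyama et al.'s data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scenario: the true traffic speed drifts slowly; one floating car
# reports it with noise. Averaging over a window T trades noise against
# responsiveness, so an intermediate T minimizes the error vs. the truth.
t = np.arange(600)                                       # seconds
true_speed = 12.0 + 4.0 * np.sin(2 * np.pi * t / 300.0)  # m/s
single_fcd = true_speed + rng.normal(0.0, 3.0, t.size)

def window_error(T):
    kernel = np.ones(T) / T
    est = np.convolve(single_fcd, kernel, mode="same")   # moving average
    return np.sqrt(np.mean((est - true_speed) ** 2))

for T in (1, 10, 30, 60, 120):
    print(T, round(window_error(T), 2))   # error dips at an intermediate T
```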
From global scaling to the dynamics of individual cities
NASA Astrophysics Data System (ADS)
Depersin, Jules; Barthelemy, Marc
2018-03-01
Scaling has been proposed as a powerful tool to analyze the properties of complex systems and in particular for cities where it describes how various properties change with population. The empirical study of scaling on a wide range of urban datasets displays apparent nonlinear behaviors whose statistical validity and meaning were recently the focus of many debates. We discuss here another aspect, which is the implication of such scaling forms on individual cities and how they can be used for predicting the behavior of a city when its population changes. We illustrate this discussion in the case of delay due to traffic congestion with a dataset of 101 US cities in the years 1982–2014. We show that the scaling form obtained by agglomerating all of the available data for different cities and for different years does display a nonlinear behavior, but which appears to be unrelated to the dynamics of individual cities when their population grows. In other words, the congestion-induced delay in a given city does not depend on its population only, but also on its previous history. This strong path dependency prohibits the existence of a simple scaling form valid for all cities and shows that we cannot always agglomerate the data for many different systems. More generally, these results also challenge the use of transversal data for understanding longitudinal series for cities.
Zhang, Jisheng; Jia, Limin; Niu, Shuyun; Zhang, Fan; Tong, Lu; Zhou, Xuesong
2015-01-01
It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to enjoy a more flexible regulatory environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed of UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model that minimizes the detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent, least-cost path-finding sub-problems. Several examples are used to demonstrate the results of the proposed models in UAV route planning for small and medium-scale networks. PMID:26076404
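A minimal sketch of a time-dependent least-cost search over a discretized space-time network, in the spirit of the sub-problems described above (the two-node network, unit waiting arcs, and cost function are invented; the paper's integer program and Lagrangian multipliers are not reproduced):

```python
def least_cost_path(neighbors, travel_time, state_cost, origin, horizon):
    """DP over (node, time) states with waiting and flying arcs."""
    best = {(origin, 0): 0.0}
    pred = {}
    for t in range(horizon):
        layer = [(s, c) for s, c in best.items() if s[1] == t]
        for (node, _), c in layer:
            # waiting arc (stay one tick) plus flying arcs to neighbors
            options = [(node, 1)] + [(m, travel_time[node, m]) for m in neighbors[node]]
            for nxt, dt in options:
                t2 = t + dt
                if t2 > horizon:
                    continue
                c2 = c + state_cost(nxt, t2)
                if (nxt, t2) not in best or c2 < best[(nxt, t2)]:
                    best[(nxt, t2)] = c2
                    pred[(nxt, t2)] = (node, t)
    return best, pred

neighbors = {"base": ["crash_site"], "crash_site": ["base"]}
travel_time = {("base", "crash_site"): 3, ("crash_site", "base"): 3}
# Illustrative cost: delay accrues for every tick the UAV is not on scene.
state_cost = lambda node, t: 0.0 if node == "crash_site" else 1.0
best, _ = least_cost_path(neighbors, travel_time, state_cost, "base", horizon=8)
print(min(c for (n, t), c in best.items() if n == "crash_site"))
```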
DOT National Transportation Integrated Search
2015-08-01
This document represents the final report of the national evaluation of congestion reduction strategies at six sites that received federal funding under the Urban Partnership Agreement (UPA) and Congestion Reduction Demonstration (CRD) programs. The ...
Mobility and the costs of congestion in New Jersey : 2001 update
DOT National Transportation Integrated Search
2001-07-01
The objective of the Mobility and the Costs of Congestion study is to measure quantifiable and qualitative impacts of roadway congestion in New Jersey. The study addresses the impacts of congestion on an average traveler or affected person, as well a...
Diagnosis and management of nasal congestion: the role of intranasal corticosteroids.
Benninger, Michael
2009-01-01
Nasal congestion is considered the most bothersome of allergic rhinitis (AR) symptoms and can significantly impair ability to function at work, home, and school. Effective management of AR-related nasal congestion depends on accurate diagnosis and appropriate treatment. Many individuals with AR and AR-related congestion remain undiagnosed and do not receive prescription medication. However, new tools intended to improve the diagnosis of nasal congestion have been developed and validated. Intranasal corticosteroids (INSs) are recommended as first-line therapy for patients with moderate-to-severe AR and also when nasal congestion is a prominent symptom. Double blind, randomized clinical trials have demonstrated greater efficacy of INSs versus placebo, antihistamines, or montelukast for relief of all nasal symptoms, especially congestion. Patient adherence to treatment also affects outcomes, and this may be influenced by patient preferences for the sensory attributes of an individual drug. Increased awareness of the effects of AR-related nasal congestion, the efficacy and safety of available pharmacotherapies, and barriers to adherence may improve clinical outcomes.
Signalling and obfuscation for congestion control
NASA Astrophysics Data System (ADS)
Mareček, Jakub; Shorten, Robert; Yu, Jia Yuan
2015-10-01
We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions.
Improving Explicit Congestion Notification with the Mark-Front Strategy
NASA Technical Reports Server (NTRS)
Liu, Chunlei; Jain, Raj
2001-01-01
Delivering congestion signals is essential to the performance of networks. Current TCP/IP networks use packet losses to signal congestion. Packet losses not only reduce TCP performance, but also add large delay. Explicit Congestion Notification (ECN) delivers a faster indication of congestion and has better performance. However, current ECN implementations mark the packet at the tail of the queue. In this paper, we propose the mark-front strategy to send an even faster congestion signal. We show that the mark-front strategy reduces the buffer size requirement, improves link efficiency and provides better fairness among users. Simulation results that verify our analysis are also presented.
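The difference between mark-tail and mark-front is simply where the congestion-experienced bit lands. A toy sketch (the threshold rule stands in for whatever congestion-detection rule a real gateway would use):

```python
from collections import deque

class MarkFrontQueue:
    """Toy ECN queue: when congestion is detected, mark the packet at the
    *front* (about to depart) instead of the newly arrived one at the tail,
    so the congestion signal reaches the sender one queueing delay sooner."""

    def __init__(self, capacity, mark_threshold):
        self.q = deque()
        self.capacity = capacity
        self.mark_threshold = mark_threshold

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False                      # tail drop on overflow
        self.q.append(pkt)
        if len(self.q) >= self.mark_threshold:
            self.q[0]["ce"] = True            # mark-front: head of the queue
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = MarkFrontQueue(capacity=8, mark_threshold=3)
for i in range(5):
    q.enqueue({"seq": i, "ce": False})
print([(p["seq"], p["ce"]) for p in q.q])     # earliest packet carries the mark
```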
23 CFR 971.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2011 CFR
2011-04-01
... congestion management strategies; (v) Determine methods to monitor and evaluate the performance of the multi... Federal lands congestion management system (CMS). (a) For purposes of this section, congestion means the... for the transportation systems providing access to and within National Forests, as appropriate, that...
23 CFR 970.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2014 CFR
2014-04-01
... 23 Highways 1 2014-04-01 2014-04-01 false Federal lands congestion management system (CMS). 970... LANDS HIGHWAYS NATIONAL PARK SERVICE MANAGEMENT SYSTEMS National Park Service Management Systems § 970.214 Federal lands congestion management system (CMS). (a) For purposes of this section, congestion...
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Quantifying tight-gas sandstone permeability via critical path analysis
USDA-ARS?s Scientific Manuscript database
Rock permeability has been actively investigated over the past several decades by the geosciences community. However, its accurate estimation still presents significant technical challenges, especially in spatially complex rocks. In this letter, we apply critical path analysis (CPA) to estimate perm...
23 CFR 450.320 - Congestion management process in transportation management areas.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 23 Highways 1 2014-04-01 2014-04-01 false Congestion management process in transportation... Programming § 450.320 Congestion management process in transportation management areas. (a) The transportation planning process in a TMA shall address congestion management through a process that provides for safe and...
23 CFR 450.320 - Congestion management process in transportation management areas.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false Congestion management process in transportation... Programming § 450.320 Congestion management process in transportation management areas. (a) The transportation planning process in a TMA shall address congestion management through a process that provides for safe and...
23 CFR 450.320 - Congestion management process in transportation management areas.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 23 Highways 1 2013-04-01 2013-04-01 false Congestion management process in transportation... Programming § 450.320 Congestion management process in transportation management areas. (a) The transportation planning process in a TMA shall address congestion management through a process that provides for safe and...
76 FR 75875 - Plan for Conduct of 2012 Electric Transmission Congestion Study
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-05
... DEPARTMENT OF ENERGY Plan for Conduct of 2012 Electric Transmission Congestion Study AGENCY... preparation of a study of electric transmission congestion pursuant to section 216(a)(1) of the Federal Power...-register at this Web site: http://energy.gov/oe/congestion-study-2012.
Age estimation of the Deccan Traps from the North American apparent polar wander path
NASA Technical Reports Server (NTRS)
Stoddard, Paul R.; Jurdy, Donna M.
1988-01-01
It has recently been proposed that flood basalt events, such as the eruption of the Deccan Traps, have been responsible for mass extinctions. To test this hypothesis, accurate estimations of the ages and durations of these events are needed. In the case of the Deccan Traps, however, neither the age nor the duration of emplacement is well constrained; measured ages range from 40 to more than 80 Myr, and estimates of duration range from less than 1 to 67 Myr. To make an independent age determination, paleomagnetic and sea-floor-spreading data are used, and the associated errors are estimated. The Deccan paleomagnetic pole is compared with the reference apparent polar wander path of North America by rotating the positions of the paleomagnetic pole for the Deccan Traps to the reference path for a range of assumed ages. Uncertainties in the apparent polar wander path and the Deccan paleopole position, and errors resulting from the plate reconstruction, are estimated. It is suggested that 83-70 Myr is the most likely time of extrusion of these volcanic rocks.
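In sketch form, the age assignment picks the reference-path age whose pole (after the plate reconstruction brings both into a common frame) lies closest to the Deccan pole on the sphere. All poles and ages below are invented placeholders, not the paper's data:

```python
import math

def angular_distance(pole1, pole2):
    """Great-circle distance (degrees) between two paleopoles (lat, lon)."""
    la1, lo1, la2, lo2 = map(math.radians, (*pole1, *pole2))
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Hypothetical reference path: age (Myr) -> reference pole, assumed already
# rotated into a common frame for that trial age.
reference_path = {60: (62.0, 255.0), 70: (66.0, 245.0), 80: (69.0, 230.0)}
deccan_pole = (65.0, 247.0)

age = min(reference_path,
          key=lambda a: angular_distance(deccan_pole, reference_path[a]))
print(age, angular_distance(deccan_pole, reference_path[age]))  # best-fit age
```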
Current state of traffic pollution in Bangladesh and metropolitan Dhaka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karim, Masud; Matsui, Hiroshi; Ohno, Takashi
1997-12-31
Limited resources invested for the development of transport facilities, such as infrastructure and vehicles, coupled with the rapid rise in transport demand, the existence of a huge number of non-motorized vehicles on roads, and the lack of adequate and proper traffic management schemes are producing severe transport problems in almost all the urban areas of Bangladesh. The worsening traffic congestion in the streets and the suffering of inhabitants from vehicle emissions demand extensive research in this field. However, no detailed study concerning traffic congestion and pollution problems for urban areas of Bangladesh has yet been done. Therefore, it has become increasingly important to examine the present state of the problem. This research is a preliminary evaluation of the current traffic pollution problem in Bangladesh. The daily total emissions of NO{sub x}, HC, CO, PM, and SO{sub x} are estimated using the daily fuel consumption and total traffic flows in Dhaka city. Estimated daily emissions are 42, 39, 314, 14, and 42 t/d for NO{sub x}, HC, CO, PM, and SO{sub x}, respectively. The emissions estimated using two different methods revealed good correlation. Daily average concentrations of NO{sub x} (NO{sub 2}, NO) were measured at 30 street locations in Dhaka city during September and November, 1996. The results showed extremely high concentrations of NO{sub 2} and NO at these locations.
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek
2016-04-01
Air pollution is one of the major concerns for human health. Associations between air pollution and health are often calculated using long-term (i.e. years to decades) information on personal exposure for each individual in a cohort. Personal exposure is the air pollution aggregated along the space-time path visited by an individual. As air pollution may vary considerably in space and time, for instance due to motorised traffic, the estimation of the spatio-temporal location of a person's space-time path is important to identify the personal exposure. However, long-term exposure is mostly calculated using the air pollution concentration at the x, y location of someone's home, which does not consider that individuals are mobile (commuting, recreation, relocation). This assumption is often made because it is a major challenge to estimate space-time paths for all individuals in large cohorts, mostly because limited information on the mobility of individuals is available. We address this issue by evaluating multiple approaches for the calculation of space-time paths, thereby estimating the personal exposure along these space-time paths with hyper-resolution air pollution maps at national scale. This allows us to evaluate the effect of the space-time path and the resulting personal exposure. Air pollution (e.g. NO2, PM10) was mapped for the entire Netherlands at a resolution of 5×5 m² using the land use regression models developed in the European Study of Cohorts for Air Pollution Effects (ESCAPE, http://escapeproject.eu/) and the open source software PCRaster (http://www.pcraster.eu). The models use predictor variables like population density, land use, and traffic-related data sets, and are able to model the spatial variation and within-city variability of annual average concentration values. We approximated space-time paths for all individuals in a cohort using various aggregations, including those representing space-time paths as the outline of a person's home or the associated parcel of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.
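One of the simpler aggregations above, a circular buffer around the home, reduces to averaging the raster within a distance mask. A minimal sketch on a synthetic NO2 grid (the cell size matches the 5 m maps; every other number is invented):

```python
import numpy as np

def mean_in_buffer(conc, x, y, radius, cellsize=5.0):
    """Average a concentration raster within a circular buffer around (x, y).

    conc     : 2-D array of annual-average concentrations (e.g. NO2, ug/m3)
    x, y     : home location in map units (metres)
    radius   : buffer radius in the same units
    cellsize : raster resolution (5 m in the ESCAPE-style maps)
    """
    rows, cols = np.mgrid[0:conc.shape[0], 0:conc.shape[1]]
    dist = np.hypot((cols - x / cellsize) * cellsize,
                    (rows - y / cellsize) * cellsize)
    return float(conc[dist <= radius].mean())

rng = np.random.default_rng(1)
no2 = 25.0 + rng.normal(0.0, 5.0, (200, 200))      # toy 1 km x 1 km map
home_only = no2[100, 100]                          # x,y-of-home approach
buffered = mean_in_buffer(no2, x=500.0, y=500.0, radius=100.0)
print(home_only, buffered)                         # exposure differs by method
```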
Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization
Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan
2017-01-01
This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources. PMID:29104748
Flow assignment model for quantitative analysis of diverting bulk freight from road to railway
Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian
2017-01-01
Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then, general cost functions which embody the factors that shippers pay attention to when choosing mode and path are formulated. The general cost functions contain the congestion cost on road and the capacity constraints of railways and freight stations. Based on the general network and general cost function, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to construct an envelope curve to linearize it. Finally, a numerical example is presented to test the model and illustrate the method for quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
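A minimal user-equilibrium illustration on two parallel routes (the BPR-style road cost and flat rail cost are invented stand-ins for the paper's general cost functions, and the envelope-curve linearization is not reproduced): at equilibrium, both used routes have equal cost, which a bisection on the cost difference finds directly.

```python
# Two-route user-equilibrium sketch: road cost rises with congestion
# (BPR-style), rail cost is flat. At equilibrium no shipper can lower
# its cost by switching, so used routes have equal generalized cost.
def road_cost(flow, free_cost=10.0, capacity=50.0):
    return free_cost * (1.0 + 0.15 * (flow / capacity) ** 4)

def rail_cost(flow):
    return 14.0                        # flat handling + line-haul cost

def equilibrium(total_demand, tol=1e-9):
    lo, hi = 0.0, total_demand         # bracket on the road flow
    for _ in range(100):               # bisection on the cost difference
        mid = 0.5 * (lo + hi)
        if road_cost(mid) < rail_cost(total_demand - mid):
            lo = mid                   # road still cheaper: shift more to road
        else:
            hi = mid
        if hi - lo < tol:
            break
    return mid, total_demand - mid

road, rail = equilibrium(100.0)
print(road, rail, road_cost(road), rail_cost(rail))   # costs equalize at ~14
```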
Christodoulou, Manolis A; Kontogeorgou, Chrysa
2008-10-01
In recent years there has been a great effort to convert the existing Air Traffic Control system into a novel system known as Free Flight. Free Flight is based on the concept that increasing international airspace capacity will grant more freedom to individual pilots during the enroute flight phase, thereby giving them the opportunity to alter flight paths in real time. Under the current system, pilots must request, then receive, permission from air traffic controllers to alter flight paths. Understandably, the new system allows pilots to gain the upper hand in air traffic. At the same time, however, this freedom increases pilot responsibility. Pilots face a new challenge in avoiding the traffic that shares congested airspace. In order to ensure safety, an accurate system able to predict and prevent conflicts among aircraft is essential. Certain flight maneuvers exist to prevent flight disturbances or collision, and these are graded in the following categories: vertical, lateral and airspeed. This work focuses on airspeed maneuvers and introduces a new idea for the control of Free Flight, in three dimensions, using neural networks trained with examples prepared through non-linear programming.
A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks
Costa, Daniel G.; Guedes, Luiz Affonso
2011-01-01
Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang
2013-01-01
As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
Mixed Transportation Network Design under a Sustainable Development Perspective
Qin, Jin; Ni, Ling-lin; Shi, Feng
2013-01-01
A mixed transportation network design problem considering sustainable development was studied in this paper. Based on the discretization of continuous link-grade decision variables, a bilevel programming model was proposed to describe the problem, in which sustainability factors, including vehicle exhaust emissions, land-use scale, link load, and financial budget, are considered. The objective of the model is to minimize the total amount of resources exploited under the premise of meeting all the construction goals. A heuristic algorithm, which combined the simulated annealing and path-based gradient projection algorithm, was developed to solve the model. The numerical example shows that the transportation network optimized with the method above not only significantly alleviates the congestion on the link, but also reduces vehicle exhaust emissions within the network by up to 41.56%. PMID:23476142
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
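For the pure-translation case, heading estimation reduces to a linear least-squares problem: every flow vector must point away from the focus of expansion (FOE). A minimal sketch on a synthetic flow field (no IMO segmentation, which is the paper's actual contribution):

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion: for pure translation each flow
    vector (u, v) at (x, y) points away from the FOE (x0, y0), so
    u*(y - y0) - v*(x - x0) = 0 for every sample; solve for (x0, y0)."""
    x, y = points.T
    u, v = flows.T
    A = np.column_stack([v, -u])       # rows: [v, -u] . [x0, y0] = v*x - u*y
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

rng = np.random.default_rng(7)
pts = rng.uniform(-1.0, 1.0, (200, 2))          # sample image locations
true_foe = np.array([0.2, -0.1])
flows = (pts - true_foe) * 0.5 + rng.normal(0.0, 0.01, (200, 2))
print(estimate_foe(pts, flows))                 # close to (0.2, -0.1)
```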
Autonomous Congestion Control in Delay-Tolerant Networks
NASA Technical Reports Server (NTRS)
Burleigh, Scott; Jennings, Esther; Schoolcraft, Joshua
2006-01-01
Congestion control is an important feature that directly affects network performance. Network congestion may cause loss of data or long delays. Although this problem has been studied extensively in the Internet, the solutions for Internet congestion control do not apply readily to challenged network environments such as Delay Tolerant Networks (DTN) where end-to-end connectivity may not exist continuously and latency can be high. In DTN, end-to-end rate control is not feasible. This calls for congestion control mechanisms where the decisions can be made autonomously with local information only. We use an economic pricing model and propose a rule-based congestion control mechanism where each router can autonomously decide on whether to accept a bundle (data) based on local information such as available storage and the value and risk of accepting the bundle (derived from historical statistics). Preliminary experimental results show that this congestion control mechanism can protect routers from resource depletion without loss of data.
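A hedged sketch of the kind of local accept/reject rule described above (the value/risk pricing below is invented for illustration, not the paper's rule set):

```python
# Rule-based acceptance sketch for a DTN router: accept a bundle only if
# local storage allows it and its expected value (from historical delivery
# statistics) outweighs the risk of occupying storage until expiry.
def accept_bundle(size, ttl, free_storage, p_deliver,
                  value_per_byte=1.0, storage_price_per_byte_sec=1e-4):
    if size > free_storage:
        return False                                  # would deplete storage
    expected_value = p_deliver * value_per_byte * size
    expected_risk = storage_price_per_byte_sec * size * ttl
    return expected_value > expected_risk

# A likely-deliverable bundle is accepted; a long-shot one is refused.
print(accept_bundle(size=10_000, ttl=3600, free_storage=50_000, p_deliver=0.8))
print(accept_bundle(size=10_000, ttl=3600, free_storage=50_000, p_deliver=0.2))
```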
A research of the community’s opening to the outside world
NASA Astrophysics Data System (ADS)
Xu, Lan; Liu, Xiangzhuo
2017-03-01
Closed residential areas, known as gated communities, fragment the traffic network and produce various degrees of traffic congestion through features such as severed links, dead ends and T-shaped roads. In order to reveal the mechanism of this congestion, establish an effective evaluation index system and provide a theoretical basis for the study of traffic congestion, we have researched the factors behind traffic congestion and established a scientific evaluation index system combining experience from home and abroad, based on domestic congestion conditions. Firstly, we analyse the traffic network as the entry point, and then establish an evaluation model of road capacity using an AHP index system. Secondly, we divide the condition of urban congestion into 5 levels, from congested to smooth. Besides, with VISSIM software, simulations of traffic capacity before and after community opening are carried out. Finally, we put forward reasonable suggestions based on the combination of the models and reality.
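For the AHP step, criterion weights come from the principal eigenvector of a pairwise-comparison matrix, together with the standard consistency check. A minimal sketch (the three criteria and comparison values are invented):

```python
import numpy as np

def ahp_weights(pairwise):
    """Weights from an AHP pairwise-comparison matrix via the principal
    eigenvector, with Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # random index (Saaty)
    return w, ci / ri                          # weights, consistency ratio

# Illustrative criteria: road density, junction spacing, lane capacity.
pairwise = [[1.0,   3.0, 5.0],
            [1/3.0, 1.0, 2.0],
            [1/5.0, 1/2.0, 1.0]]
w, cr = ahp_weights(pairwise)
print(w, cr)       # CR < 0.1 indicates acceptable consistency
```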
... congestion; Lung water; Pulmonary congestion; Heart failure - pulmonary edema ... Pulmonary edema is often caused by congestive heart failure. When the heart is not able to pump efficiently, blood ...
Tang, Junqing; Heinimann, Hans Rudolf
2018-01-01
Traffic congestion brings not only delay and inconvenience, but also associated national concerns, such as greenhouse gases, air pollutants, road safety issues and risks. Identification, measurement, tracking, and control of urban recurrent congestion are vital for building a livable and smart community. A considerable amount of work has contributed to tackling the problem. Several methods, such as time-based approaches and level of service, can be effective for characterizing congestion on urban streets. However, studies with systemic perspectives have played only a minor role in congestion quantification. Resilience, on the other hand, is an emerging concept that focuses on comprehensive systemic performance and characterizes the ability of a system to cope with disturbance and to recover its functionality. In this paper, we treated recurrent congestion as an internal disturbance and proposed a modified metric inspired by the well-applied "R4" resilience-triangle framework. We constructed the metric with generic dimensions from both resilience engineering and transport science to quantify recurrent congestion based on spatial-temporal traffic patterns, and compared it with two other approaches in freeway and signal-controlled arterial cases. Results showed that the metric can effectively capture congestion patterns in the study area and provides a quantitative benchmark for comparison. The comparison indicated not only the good performance of the proposed metric in measuring congestion strength, but also its capability to account for the discharging process of congestion. Sensitivity tests showed that the proposed metric is robust against parameter perturbation within the Robustness Range (RR), but the number of identified congestion patterns can be influenced by the existence of ϵ. In addition, the Elasticity Threshold (ET) and the spatial dimension of the cell-based platform significantly affect the congestion results, in both the number of detected patterns and their intensity. By tackling this conventional problem with an emerging concept, our metric provides a systemic alternative approach and enriches the toolbox for congestion assessment. Future work will be conducted on a larger scale with multiplex scenarios in various traffic conditions.
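A resilience-triangle-style severity measure can be sketched as the integrated performance deficit over the congested period. A minimal illustration (the speed dip and normalization are invented; the paper's metric adds further dimensions):

```python
import numpy as np

def congestion_resilience_loss(t, speed, free_flow_speed):
    """Area of the 'resilience triangle': the performance deficit
    (1 - v/v_ff) integrated over the period, normalized per unit time."""
    perf = np.clip(speed / free_flow_speed, 0.0, 1.0)
    loss = np.trapz(1.0 - perf, t)              # integrate the deficit
    return loss / (t[-1] - t[0])                # severity in [0, 1]

t = np.linspace(0.0, 3.0, 61)                   # hours across a peak period
speed = 60.0 - 35.0 * np.exp(-((t - 1.5) ** 2) / 0.18)   # dip and recovery
print(congestion_resilience_loss(t, speed, free_flow_speed=60.0))
```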
Analysis of random drop for gateway congestion control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Hashem, Emam Salaheddin
1989-01-01
Lately, the growing demand on the Internet has prompted the need for more effective congestion control policies. Currently, no gateway policy is used to relieve and signal congestion, which leads to unfair service to individual users and a degradation of overall network performance. Network simulation was used to illustrate the character of Internet congestion and its causes. A newly proposed gateway congestion control policy, called Random Drop, was considered as a promising solution to this pressing problem. Random Drop relieves resource congestion upon buffer overflow by choosing a random packet from the service queue to be dropped. The random choice should result in a drop distribution proportional to the bandwidth distribution among all contending TCP connections, thus applying the necessary fairness. Nonetheless, the simulation experiments demonstrate several shortcomings of this policy. Because Random Drop is a congestion control policy that is not applied until congestion has already occurred, it usually results in a high drop rate that hurts too many connections, including well-behaved ones. Even though the number of packets dropped differs from one connection to another depending on buffer utilization upon overflow, the TCP recovery overhead is high enough to neutralize these differences, causing unfair congestion penalties. Besides, the drop distribution itself is an inaccurate representation of the average bandwidth distribution, missing much important information about bandwidth utilization between buffer overflow events. A modification of Random Drop to perform congestion avoidance by applying the policy early, called Early Random Drop, was also proposed. Early Random Drop has the advantage of avoiding the high drop rate of buffer overflow. The early application of the policy removes the pressure of congestion relief and allows more accurate signaling of congestion. To be used effectively, algorithms for the dynamic adjustment of the parameters of Early Random Drop to suit the current network load must still be developed.
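The policy itself is one line: on overflow, evict a uniformly random queued packet, so a connection's drop share tracks its share of the buffer. A toy simulation (arrival pattern and buffer size invented):

```python
import random

def random_drop_sim(arrivals, buffer_size, service_per_tick=1, seed=42):
    """Toy gateway: on buffer overflow, drop a *random* queued packet, so a
    connection's drop probability tracks its share of the buffer."""
    rng = random.Random(seed)
    queue, drops = [], {}
    for batch in arrivals:
        for conn in batch:
            queue.append(conn)
            if len(queue) > buffer_size:
                victim = rng.randrange(len(queue))   # random queue position
                dropped = queue.pop(victim)
                drops[dropped] = drops.get(dropped, 0) + 1
        del queue[:service_per_tick]                 # serve head of line
    return drops

# Connection "A" sends twice as fast as "B"; drops skew toward "A" roughly 2:1.
arrivals = [["A", "A", "B"] for _ in range(200)]
print(random_drop_sim(arrivals, buffer_size=10))
```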
NASA Technical Reports Server (NTRS)
Crane, R. K.; Blood, D. W.
1979-01-01
A single model for a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation is proposed. Refinements to the Global Rain Production Model are incorporated. Path loss and noise estimation procedures as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz are provided. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.
Making the Traffic Operations Case for Congestion Pricing: Operational Impacts of Congestion Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Hu, Patricia S; Davidson, Diane
2011-02-01
Congestion begins when an excess of vehicles on a segment of roadway at a given time, resulting in speeds that are significantly slower than normal or 'free flow' speeds. Congestion often means stop-and-go traffic. The transition occurs when vehicle density (the number of vehicles per mile in a lane) exceeds a critical level. Once traffic enters a state of congestion, recovery or time to return to a free-flow state is lengthy; and during the recovery process, delay continues to accumulate. The breakdown in speed and flow greatly impedes the efficient operation of the freeway system, resulting in economic, mobility, environmentalmore » and safety problems. Freeways are designed to function as access-controlled highways characterized by uninterrupted traffic flow so references to freeway performance relate primarily to the quality of traffic flow or traffic conditions as experienced by users of the freeway. The maximum flow or capacity of a freeway segment is reached while traffic is moving freely. As a result, freeways are most productive when they carry capacity flows at 60 mph, whereas lower speeds impose freeway delay, resulting in bottlenecks. Bottlenecks may be caused by physical disruptions, such as a reduced number of lanes, a change in grade, or an on-ramp with a short merge lane. This type of bottleneck occurs on a predictable or 'recurrent' basis at the same time of day and same day of week. Recurrent congestion totals 45% of congestion and is primarily from bottlenecks (40%) as well as inadequate signal timing (5%). Nonrecurring bottlenecks result from crashes, work zone disruptions, adverse weather conditions, and special events that create surges in demand and that account for over 55% of experienced congestion. Figure 1.1 shows that nonrecurring congestion is composed of traffic incidents (25%), severe weather (15%), work zones, (10%), and special events (5%). Between 1995 and 2005, the average percentage change in increased peak traveler delay, based on hours spent in traffic in a year, grew by 22% as the national average of hours spent in delay grew from 36 hours to 44 hours. Peak delay per traveler grew one-third in medium-size urban areas over the 10 year period. The traffic engineering community has developed an arsenal of integrated tools to mitigate the impacts of congestion on freeway throughput and performance, including pricing of capacity to manage demand for travel. Congestion pricing is a strategy which dynamically matches demand with available capacity. A congestion price is a user fee equal to the added cost imposed on other travelers as a result of the last traveler's entry into the highway network. The concept is based on the idea that motorists should pay for the additional congestion they create when entering a congested road. The concept calls for fees to vary according to the level of congestion with the price mechanism applied to make travelers more fully aware of the congestion externality they impose on other travelers and the system itself. The operational rationales for the institution of pricing strategies are to improve the efficiency of operations in a corridor and/or to better manage congestion. To this end, the objectives of this project were to: (1) Better understand and quantify the impacts of congestion pricing strategies on traffic operations through the study of actual projects, and (2) Better understand and quantify the impacts of congestion pricing strategies on traffic operations through the use of modeling and other analytical methods. 
Specifically, the project was to identify credible analytical procedures that FHWA can use to quantify the impacts of various congestion pricing strategies on traffic flow (throughput) and congestion.
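As a hedged illustration of the marginal-cost idea behind a congestion price, the short Python sketch below computes the external delay cost one extra vehicle imposes on all others, using the standard BPR (Bureau of Public Roads) volume-delay function. The free-flow time, capacity, and value-of-time figures are hypothetical, not taken from the project.

# Marginal-external-cost congestion toll on one link, using the
# standard BPR volume-delay function: t(v) = t0 * (1 + a*(v/c)^b).
# All numbers here are hypothetical illustration values.

T0 = 10.0       # free-flow travel time (minutes)
CAP = 1800.0    # link capacity (vehicles/hour)
A, B = 0.15, 4  # conventional BPR parameters
VOT = 0.30      # value of time (EUR per minute), hypothetical

def travel_time(v):
    """Per-vehicle travel time (minutes) at flow v (veh/h)."""
    return T0 * (1.0 + A * (v / CAP) ** B)

def marginal_external_cost(v):
    """Extra delay cost the marginal vehicle imposes on the v others.
    The derivative of total delay v*t(v) is t(v) + v*t'(v); the
    externality is the v*t'(v) part, monetized with the value of time."""
    dt_dv = T0 * A * B * (v ** (B - 1)) / (CAP ** B)
    return VOT * v * dt_dv

for v in (900, 1800, 2200):  # below, at, and above capacity
    print(f"flow {v:5d} veh/h: time {travel_time(v):6.2f} min, "
          f"toll {marginal_external_cost(v):6.2f} EUR")

The toll rises steeply with flow, which is the behavior a dynamic congestion-pricing scheme exploits.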
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
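The weighting step of the MTP method is simple arithmetic; the sketch below is a minimal illustration with hypothetical path distances and group proportions, not data from the study.

# Multiple travel paths (MTP) daily travel distance: weight each
# path's distance by the proportion of the group that used it,
# then sum. Values below are hypothetical.

paths = [
    # (distance in meters, fraction of group members using the path)
    (1450.0, 0.50),
    (1620.0, 0.30),
    (1510.0, 0.20),
]

assert abs(sum(w for _, w in paths) - 1.0) < 1e-9  # proportions sum to 1

dtd_mtp = sum(dist * w for dist, w in paths)
print(f"MTP daily travel distance: {dtd_mtp:.1f} m")
# A center-of-mass path cutting between the branches can be shorter
# than every individual path, underestimating distance traveled.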
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
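A minimal sketch of the HC-DTW idea follows, assuming hypothetical second-by-second speed profiles; a handwritten DTW recursion and SciPy's hierarchical clustering stand in for the authors' implementation, and the profiles, cluster count, and linkage choice are illustrative assumptions.

# Sketch: cluster vehicle speed profiles with hierarchical clustering
# over dynamic-time-warping (DTW) distances; each cluster then yields
# one representative link driving schedule (LDS). Not the authors' code.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
# Hypothetical 60 s speed profiles (m/s): cruising vs. stop-and-go.
profiles = [20 + rng.normal(0, 1, 60) for _ in range(5)] + \
           [np.clip(10 * np.sin(np.linspace(0, 6, 60)) + 8, 0, None)
            + rng.normal(0, 1, 60) for _ in range(5)]

n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(profiles[i], profiles[j])

labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
print("cluster labels:", labels)  # each cluster -> one representative LDS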
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
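A toy illustration of why "many optimal sorting paths" arise is sketched below: brute-force BFS over small unsigned permutations counts minimal reversal sorting paths and samples one uniformly. This is only a didactic stand-in; the paper's MC4Inversion handles signed genomes with parallel MCMC, for which brute force is infeasible.

# Count minimal inversion (reversal) sorting paths between two small
# permutations and sample one uniformly at random. Unsigned toy model.
import random
from collections import deque

def neighbors(p):
    """All permutations reachable by reversing one contiguous segment."""
    for i in range(len(p)):
        for j in range(i + 2, len(p) + 1):
            yield p[:i] + p[i:j][::-1] + p[j:]

def count_optimal_paths(start, goal):
    # BFS from start; npaths[v] = number of shortest paths reaching v.
    dist, npaths = {start: 0}, {start: 1}
    q = deque([start])
    while q:
        v = q.popleft()
        if v == goal:
            continue
        for w in neighbors(v):
            if w not in dist:
                dist[w], npaths[w] = dist[v] + 1, 0
                q.append(w)
            if dist[w] == dist[v] + 1:
                npaths[w] += npaths[v]
    return dist, npaths

def sample_optimal_path(start, goal):
    """Walk back from goal, picking predecessors with probability
    proportional to their path counts -> uniform over optimal paths."""
    dist, npaths = count_optimal_paths(start, goal)
    path, v = [goal], goal
    while v != start:
        preds = [w for w in neighbors(v)
                 if w in dist and dist[w] == dist[v] - 1]
        v = random.choices(preds, [npaths[w] for w in preds])[0]
        path.append(v)
    return path[::-1]

start, goal = (3, 1, 2, 4), (1, 2, 3, 4)
dist, npaths = count_optimal_paths(start, goal)
print("distance:", dist[goal], "optimal paths:", npaths[goal])
print(sample_optimal_path(start, goal))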
The effects of congestions tax on air quality and health
NASA Astrophysics Data System (ADS)
Johansson, Christer; Burman, Lars; Forsberg, Bertil
The "Stockholm Trial" involved a road pricing system to improve the air quality and reduce traffic congestion. The test period of the trial was January 3-July 31, 2006. Vehicles travelling into and out of the charge cordon were charged for every passage during weekdays. The amount due varied during the day and was highest during rush hours (20 SEK = 2.2 EUR, maximum 60 SEK per day). Based on measured and modelled changes in road traffic it was estimated that this system resulted in a 15% reduction in total road use within the charged cordon. Total traffic emissions in this area of NO x and PM10 fell by 8.5% and 13%, respectively. Air quality dispersion modelling was applied to assess the effect of the emission reductions on ambient concentrations and population exposure. For the situations with and without the trial, meteorological conditions and other emissions than from road traffic were kept the same. The calculations show that, with a permanent congestion tax system like the Stockholm Trial, the annual average NO x concentrations would be lower by up to 12% along the most densely trafficked streets. PM10 concentrations would be up to 7% lower. The limit values for both PM10 and NO 2 would still be exceeded along the most densely trafficked streets. The total population exposure of NO x in Greater Stockholm (35 × 35 km with 1.44 million people) is estimated to decrease with a rather modest 0.23 μg m -3. However, based on a long-term epidemiological study, that found an increased mortality risk of 8% per 10 μg m -3 NO x, it is estimated that 27 premature deaths would be avoided every year. According to life-table analysis this would correspond to 206 years of life gained over 10 years per 100 000 people following the trial if the effects on exposures would persist. The effect on mortality is attributed to road traffic emissions (likely vehicle exhaust particles); NO x is merely regarded as an indicator of traffic exposure. This is only the tip of the ice-berg since reductions are expected in both respiratory and cardiovascular morbidity. This study demonstrates the importance of not only assessing the effects on air quality limit values, but also to make quantitative estimates of health impacts, in order to justify actions to reduce air pollution.
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. Propagating densities of arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path
Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki
2017-01-01
Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
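The l1/total-variation flavor of camera-path smoothing can be sketched in a few lines. The sketch below uses CVXPY as a stand-in convex solver on a synthetic 1-D shaky path; it illustrates the TV-minimizing objective rather than the authors' full homography pipeline.

# Fit a new path close to the estimated (shaky) camera path while
# penalizing its temporal total variation, giving a piecewise-smooth
# result. Synthetic data; lam trades smoothness against cropping.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 120
intended = np.cumsum(0.5 * np.ones(n))    # smooth intended motion (px)
shaky = intended + rng.normal(0, 3.0, n)  # jitter added by hand shake

lam = 5.0                                 # smoothness/crop trade-off
x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(x - shaky)      # stay near data
                        + lam * cp.norm1(cp.diff(x)))  # temporal TV
cp.Problem(objective).solve()

correction = x.value - shaky  # per-frame warp offset
print(f"max |correction| = {np.abs(correction).max():.1f} px")
# Larger corrections imply more cropping, hence the trade-off in lam.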
Rate of change of heart size before congestive heart failure in dogs with mitral regurgitation.
Lord, P; Hansson, K; Kvart, C; Häggström, J
2010-04-01
The objective of the study was to examine the changes in vertebral heart scale, and left atrial and ventricular dimensions before and at onset of congestive heart failure in cavalier King Charles spaniels with mitral regurgitation. Records and radiographs from 24 cavalier King Charles spaniels with mitral regurgitation were used. Vertebral heart scale (24 dogs), and left atrial dimension and left ventricular end diastolic and end systolic diameters (18 dogs) and their rate of increase were measured at intervals over years to the onset of congestive heart failure. They were plotted against time to onset of congestive heart failure. Dimensions and rates of change of all parameters were highest at onset of congestive heart failure, the difference between observed and chance outcome being highly significant using a two-tailed chi-square test (P<0.001). The left heart chambers increase in size rapidly only in the last year before the onset of congestive heart failure. Increasing left ventricular end systolic dimension is suggestive of myocardial failure before the onset of congestive heart failure. Rate of increase of heart dimensions may be a useful indicator of impending congestive heart failure.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
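The ridge step at the heart of regularized PLSc can be sketched directly: path coefficients are solved from the (possibly near-singular) correlation matrix of predictor latent variables with a ridge term added. The correlation values below are hypothetical, not from the paper.

# Minimal sketch of the ridge idea in regularized PLSc: coefficients
# for one endogenous latent variable are solved from disattenuated
# correlations among its predictors, which can be nearly collinear.
import numpy as np

R_xx = np.array([[1.00, 0.95, 0.90],   # correlations among three
                 [0.95, 1.00, 0.93],   # exogenous latent variables
                 [0.90, 0.93, 1.00]])  # (nearly collinear on purpose)
r_xy = np.array([0.60, 0.62, 0.58])    # correlations with the outcome

def path_coefficients(R, r, ridge=0.0):
    """OLS-style path coefficients with an optional ridge penalty."""
    return np.linalg.solve(R + ridge * np.eye(len(r)), r)

print("no ridge:   ", path_coefficients(R_xx, r_xy))
print("ridge 0.05: ", path_coefficients(R_xx, r_xy, ridge=0.05))
# With near-collinearity the unregularized solution is unstable
# (large, offsetting coefficients); the ridge shrinks it.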
New Schemes for Predictive Congestion Control
1993-04-01
…and that node j is the downstream neighbor of node i. With these changes, (3.1) becomes 2R_{i,out}(t) = 2R_{i,out}(t - 1) - a*bias_i, (3.2) where bias_i … estimate the value of R_{j,out}(t + 1): 2R_{j,out}(t + 1) = 2R_{j,out}(t) - a*bias_j, (3.3) where bias_j = (2R_{j,out}(t))^{1/2} - B_j(t). Again, the notation
Dynamic Flow Management Problems in Air Transportation
NASA Technical Reports Server (NTRS)
Patterson, Sarah Stock
1997-01-01
In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large scale linear programming problems. Thus, the computation times are reasonably small for large scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of United States in which the solutions are within 1% of the corresponding lower bounds.
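A toy ground-holding instance in the spirit of the TFMP can be written as a small LP; the sketch below uses SciPy's linprog with hypothetical flights and slot capacities. Its assignment structure has an integral LP relaxation, echoing the thesis's observation that relaxed solutions are very often integral.

# Assign each flight a departure slot to minimize total delay subject
# to per-slot capacity. Data are hypothetical illustration values.
import numpy as np
from scipy.optimize import linprog

flights = ["F1", "F2", "F3"]
earliest = [0, 0, 1]   # earliest feasible slot per flight
n_slots, cap = 4, 1    # slots 0..3, one departure per slot

# Variable x[f, t] flattened to index f*n_slots + t.
nv = len(flights) * n_slots
c = np.zeros(nv)
for f in range(len(flights)):
    for t in range(n_slots):
        c[f * n_slots + t] = t - earliest[f] if t >= earliest[f] else 1e6

A_eq = np.zeros((len(flights), nv))  # each flight gets exactly one slot
for f in range(len(flights)):
    A_eq[f, f * n_slots:(f + 1) * n_slots] = 1.0
b_eq = np.ones(len(flights))

A_ub = np.zeros((n_slots, nv))       # slot capacity constraints
for t in range(n_slots):
    A_ub[t, t::n_slots] = 1.0
b_ub = cap * np.ones(n_slots)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
x = res.x.reshape(len(flights), n_slots)
for f, name in enumerate(flights):
    slot = int(np.argmax(x[f]))
    print(f"{name}: slot {slot}, delay {slot - earliest[f]}")
print("total delay:", round(res.fun))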
NASA Astrophysics Data System (ADS)
Weber, R. J.; Bates, J.; Abrams, J.; Verma, V.; Fang, T.; Klein, M.; Strickland, M. J.; Sarnat, S. E.; Chang, H. H.; Mulholland, J. A.; Tolbert, P. E.; Russell, A. G.
2015-12-01
It is hypothesized that fine particulate matter (PM2.5) inhalation can catalytically generate reactive oxygen species (ROS) in excess of the body's antioxidant capacity, leading to oxidative stress and ultimately adverse health. PM2.5 emissions from different sources vary widely in chemical composition, with varied effects on the body. Here, the ability of mixtures of different sources of PM2.5 to generate ROS and associations of this capability with acute health effects were investigated. A dithiothreitol (DTT) assay that integrates over different sources was used to quantify ROS generation potential of ambient water-soluble PM2.5 in Atlanta from June 2012 to June 2013. PM2.5 source impacts, estimated using the Chemical Mass Balance method with ensemble-averaged source impact profiles, were related to DTT activity using a linear regression model, which provided information on intrinsic DTT activity (i.e., toxicity) of each source. The model was then used to develop a time series of daily DTT activity over a ten-year period (1998-2010) for use in an epidemiologic study. Light-duty gasoline vehicles exhibited the highest intrinsic DTT activity, followed by biomass burning and heavy-duty diesel vehicles. Biomass burning contributed the largest fraction to total DTT activity, followed by gasoline and diesel vehicles (45%, 20% and 14%, respectively). These results suggest the importance of aged oxygenated organic aerosols and metals in ROS generation. Epidemiologic analyses found significant associations between estimated DTT activity and emergency department visits for congestive heart failure and asthma/wheezing attacks in the 5-county Atlanta area. Estimated DTT activity was the only pollutant measure out of PM2.5, O3, and PM2.5 constituents (elemental carbon and organic carbon) that exhibited a significant link to congestive heart failure. In two-pollutant models, DTT activity was significantly associated with asthma/wheeze and congestive heart failure while PM2.5 was not, which supports the hypothesis that PM2.5 health effects are, in part, due to oxidative stress and that DTT activity may be a better indicator of some aerosol-related health effects than PM2.5 mass.
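The source-to-toxicity regression step can be sketched with a nonnegative least-squares fit: measured DTT activity is regressed on daily source contributions, constraining intrinsic (per-microgram) activities to be nonnegative. All numbers below are synthetic; the study used CMB source apportionment on real Atlanta data.

# Recover intrinsic DTT activity per source from synthetic daily data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_days = 200
sources = ["gasoline", "diesel", "biomass"]
true_intrinsic = np.array([0.08, 0.05, 0.03])  # DTT per ug, hypothetical

impacts = rng.gamma(shape=2.0, scale=2.0, size=(n_days, len(sources)))  # ug/m3
dtt = impacts @ true_intrinsic + rng.normal(0, 0.05, n_days)            # nmol/min/m3

est, _ = nnls(impacts, dtt)   # nonnegative intrinsic activities
for s, b in zip(sources, est):
    print(f"intrinsic DTT of {s}: {b:.3f}")
contrib = impacts.mean(axis=0) * est
print("share of total DTT:", np.round(contrib / contrib.sum(), 2))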
The merits of a robot: a Dutch experience.
Mobach, Mark P
2006-01-01
To determine the merits of a robot at the community pharmacy in a quasi-experiment. The applied methods for data collection were barcode-time measurements, direct observations, time-interval studies, and tally at a Dutch community pharmacy. The topics consisted of workload, waiting times, congestion, slack, general work, counter work, and work at the consultation room. The topics were studied in pre-test and post-test stages, each stage during six weeks. By using these topics and some additional data from the pharmacy, the economics of the robot were also assessed. The workload decreased by 15 prescriptions per person per day. The waiting times decreased by one minute and 18 seconds per dispensing process, reducing the wait until counter contact. The day congestion decreased by one hour 27 minutes and 36 seconds, and the day slack increased by 28 minutes. The analysis of the general work showed no appreciable difference in the bulk of the care-related activities and the other activities. However, some work was re-shuffled: a 7% increase in counter work and a 7% decrease in logistics. Moreover, statistically significant increases were observed in counter work (5%) and robot work (4%), and significant decreases in telephone work (3%) and filling work in the presence of the patient (4%). The counter tally study showed a rise in care-related activities of 8%. Moreover, it also illuminated a statistically significant decrease in "no information" (11%) and an increase in "only social" (2%). The consultation room was never used during the study. The pharmacy economics of the robot showed that the robot had high estimated costs for purchase, depreciation, and maintenance: EUR 187,024 in the first year. Moreover, the robot had a positive impact on waiting times, congestion, staffing, logistics, and care-related work, which was estimated at EUR 91,198 in the first year. The estimated payback time of the robot was three years. An introduction of the robot may indeed have the often-supposed positive effects on pharmaceutical care. Even though the costs are high and technical problems are present, the robot seems to be financially beneficial after three years. The robot can create space for pharmaceutical care, but it has a substantial cost.
Path integration in tactile perception of shapes.
Moscatelli, Alessandro; Naceri, Abdeldjallil; Ernst, Marc O
2014-11-01
Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch may provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of the motion path from tactile cues only, integrating motion information over time. In order to address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by a direction bias (inward bias). The response pattern was thus qualitatively similar to the ones reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect. Copyright © 2014 Elsevier B.V. All rights reserved.
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
2013-01-01
Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
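The stepping-stone estimator itself is compact; the hedged sketch below applies it to a toy conjugate normal model (not the phylogenetic setting) where the exact log marginal likelihood is available for checking. The ladder size, sample count, and spacing are illustrative assumptions.

# Stepping-stone estimate of a log marginal likelihood for
# y_i ~ N(mu, 1) with prior mu ~ N(0, tau^2).
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n, tau2 = 20, 4.0
y = rng.normal(0.8, 1.0, n)

def loglik(mu):
    """Log-likelihood at each value in the array mu."""
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum((y[:, None] - mu) ** 2, axis=0))

# Inverse-temperature ladder 0 = b_0 < ... < b_K = 1; cubing clusters
# points near 0, as the stepping-stone literature recommends.
K, S = 32, 2000
betas = np.linspace(0, 1, K + 1) ** 3

log_Z = 0.0
for bk, bk1 in zip(betas[:-1], betas[1:]):
    # Power posterior at b_k is conjugate: precision bk*n + 1/tau2.
    prec = bk * n + 1.0 / tau2
    mu_s = rng.normal(bk * y.sum() / prec, np.sqrt(1.0 / prec), S)
    # log of the k-th ratio: E_{b_k}[ L^(b_{k+1} - b_k) ].
    log_Z += logsumexp((bk1 - bk) * loglik(mu_s)) - np.log(S)

exact = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                   cov=np.eye(n) + tau2 * np.ones((n, n)))
print(f"stepping-stone log Z ~ {log_Z:.3f}, exact {exact:.3f}")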
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
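The coupling of paired paths is the crux of the multi-level method; a hedged sketch for a pure birth process follows, using the Anderson-Higham splitting of shared Poisson randomness. Parameters are illustrative, and the base/correction levels here (fixed tau, one correction) are a simplification of the paper's adaptive scheme.

# Multi-level tau-leap for X -> X+1 at rate lam*X: cheap coarse base
# estimator plus a variance-reduced correction from coupled pairs.
# Exact mean E[X(T)] = x0*exp(lam*T) is available for checking.
import numpy as np

rng = np.random.default_rng(5)
lam, x0, T = 1.0, 100, 1.0

def tau_leap(tau, n_paths):
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(round(T / tau))):
        x += rng.poisson(lam * x * tau)
    return x

def coupled_pair(tau_c, M, n_paths):
    """Coarse step tau_c and fine step tau_c/M driven by shared Poissons."""
    tau_f = tau_c / M
    xc = np.full(n_paths, x0, dtype=float)
    xf = np.full(n_paths, x0, dtype=float)
    for _ in range(int(round(T / tau_c))):
        a_c = lam * xc                # coarse propensity frozen for M substeps
        for _ in range(M):
            a_f = lam * xf
            m = np.minimum(a_c, a_f)  # shared part of both propensities
            n1 = rng.poisson(m * tau_f)
            xc = xc + n1 + rng.poisson((a_c - m) * tau_f)
            xf = xf + n1 + rng.poisson((a_f - m) * tau_f)
    return xc, xf

# Level 0: coarse-only base at tau=0.1; level 1: correction to tau=0.025.
base = tau_leap(0.1, 20000).mean()
xc, xf = coupled_pair(0.1, 4, 20000)
estimate = base + (xf - xc).mean()
print(f"MLMC ~ {estimate:.1f}, exact {x0 * np.exp(lam * T):.1f}")
# Sharing n1 makes Var(xf - xc) small, so the correction needs far
# fewer paths than estimating the fine level from scratch.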
Trajectory specification for high capacity air traffic control
NASA Technical Reports Server (NTRS)
Paielli, Russell A. (Inventor)
2010-01-01
Method and system for analyzing and processing information on one or more aircraft flight paths, using a four-dimensional coordinate system including three Cartesian or equivalent coordinates (x, y, z) and a fourth coordinate δ that corresponds to a distance estimated along a reference flight path to a nearest reference path location corresponding to a present location of the aircraft. Use of the coordinate δ, rather than elapsed time t, avoids coupling of along-track error into aircraft altitude and reduces effects of errors on an aircraft landing site. Along-track, cross-track and/or altitude errors are estimated and compared with a permitted error bounding space surrounding the reference flight path.
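The geometry behind the fourth coordinate can be sketched directly: δ is the arc length along the reference path to the point nearest the aircraft, and the cross-track error is the distance to that point. The 2-D polyline and position below are hypothetical stand-ins for a 3-D reference trajectory.

# Compute (delta, cross-track error) for a point against a polyline.
import numpy as np

ref = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 5.0], [30.0, 5.0]])  # waypoints

def along_cross_track(p, ref):
    best = (np.inf, 0.0)  # (cross-track distance, delta)
    s_acc = 0.0           # arc length up to current segment start
    for a, b in zip(ref[:-1], ref[1:]):
        seg = b - a
        L = np.linalg.norm(seg)
        t = np.clip(np.dot(p - a, seg) / L**2, 0.0, 1.0)  # projection param
        foot = a + t * seg
        d = np.linalg.norm(p - foot)
        if d < best[0]:
            best = (d, s_acc + t * L)
        s_acc += L
    return best[1], best[0]   # (delta, cross-track error)

delta, xtk = along_cross_track(np.array([12.0, 2.0]), ref)
print(f"delta = {delta:.2f}, cross-track error = {xtk:.2f}")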
Intelligent traffic lights based on MATLAB
NASA Astrophysics Data System (ADS)
Nie, Ying
2018-04-01
This paper describes an intelligent traffic-light system. Using MATLAB, camera photographs are transformed into digital signals, and road congestion is classified into three levels: very congested, congested, and slightly congested. Through MCU (microcontroller) programming, different roads are assigned different delay times. This method saves time and resources and thereby reduces road congestion.
The paper describes a methodology developed to estimate emissions factors for a variety of different area sources in a rapid, accurate, and cost-effective manner. The methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that participate in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
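Seed-sampled betweenness estimation is a small variation on Brandes' algorithm; the sketch below runs the single-source accumulation from a random subset S and rescales by |V|/|S|. Random seeds are used here, whereas the report studies alternate seed-selection strategies with lower error; the tiny graph is hypothetical.

# Approximate betweenness centrality from sampled source vertices.
import random
from collections import deque, defaultdict

def approx_betweenness(adj, n_seeds, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    bc = defaultdict(float)
    for s in rng.sample(nodes, n_seeds):
        # BFS from s: shortest-path counts sigma and predecessor lists.
        dist, sigma, preds = {s: 0}, {s: 1.0}, defaultdict(list)
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] = sigma.get(w, 0.0) + sigma[v]
                    preds[w].append(v)
        # Back-propagate dependencies (Brandes' accumulation).
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    scale = len(nodes) / n_seeds   # rescale partial sums to all sources
    return {v: bc[v] * scale for v in nodes}

# Tiny undirected example graph (adjacency lists).
adj = {0: [1], 1: [0, 2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(approx_betweenness(adj, n_seeds=3))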
A new casemix adjustment index for hospital mortality among patients with congestive heart failure.
Polanczyk, C A; Rohde, L E; Philbin, E A; Di Salvo, T G
1998-10-01
Comparative analysis of hospital outcomes requires reliable adjustment for casemix. Although congestive heart failure is one of the most common indications for hospitalization, congestive heart failure casemix adjustment has not been widely studied. The purposes of this study were (1) to describe and validate a new congestive heart failure-specific casemix adjustment index to predict in-hospital mortality and (2) to compare its performance to the Charlson comorbidity index. Data from all 4,608 admissions to the Massachusetts General Hospital from January 1990 to July 1996 with a principal ICD-9-CM discharge diagnosis of congestive heart failure were evaluated. Massachusetts General Hospital patients were randomly divided in a derivation and a validation set. By logistic regression, odds ratios for in-hospital death were computed and weights were assigned to construct a new predictive index in the derivation set. The performance of the index was tested in an internal Massachusetts General Hospital validation set and in a non-Massachusetts General Hospital external validation set incorporating data from all 1995 New York state hospital discharges with a primary discharge diagnosis of congestive heart failure. Overall in-hospital mortality was 6.4%. Based on the new index, patients were assigned to six categories with incrementally increasing hospital mortality rates ranging from 0.5% to 31%. By logistic regression, "c" statistics of the congestive heart failure-specific index (0.83 and 0.78, derivation and validation set) were significantly superior to the Charlson index (0.66). Similar incrementally increasing hospital mortality rates were observed in the New York database with the congestive heart failure-specific index ("c" statistics 0.75). In an administrative database, this congestive heart failure-specific index may be a more adequate casemix adjustment tool to predict hospital mortality in patients hospitalized for congestive heart failure.
Remote Monitoring in Heart Failure: the Current State.
Mohan, Rajeev C; Heywood, J Thomas; Small, Roy S
2017-03-01
The treatment of congestive heart failure is an expensive undertaking with much of this cost occurring as a result of hospitalization. It is not surprising that many remote monitoring strategies have been developed to help patients maintain clinical stability by avoiding congestion. Most of these have failed. It seems very unlikely that these failures were the result of any one underlying false assumption but rather from the fact that heart failure is a progressive, deadly disease and that human behavior is hard to modify. One lesson that does stand out from the myriad of methods to detect congestion is that surrogates of congestion, such as weight and impedance, are not reliable or actionable enough to influence outcomes. Too many factors influence these surrogates to successfully and confidently use them to affect HF hospitalization. Surrogates are often attractive because they can be inexpensively measured and followed. They are, however, indirect estimations of congestion, and due to the lack of specificity, the time and expense expended affecting the surrogate do not provide enough benefit to warrant their use. We know that high filling pressures cause transudation of fluid into tissues and that pulmonary edema and peripheral edema drive patients to seek medical assistance. Direct measurement of these filling pressures appears to be the sole remote monitoring modality that shows a benefit in altering the course of the disease in these patients. Congestive heart failure is such a serious problem and the consequences of hospitalization so onerous in terms of patient well-being and costs to society that actual hemodynamic monitoring, despite its costs, is beneficial in carefully selected high-risk patients. Those patients who benefit are ones with a prior hospitalization and ongoing New York Heart Association (NYHA) class III symptoms. Patients with NYHA class I and II symptoms do not require hemodynamic monitoring because they largely have normal hemodynamics. Those with NYHA class IV symptoms do not benefit because their hemodynamics are so deranged that they cannot be substantially altered except by mechanical circulatory support or heart transplantation. Finally, hemodynamic monitoring offers substantial hope to those patients with normal ejection fraction (EF) heart failure, a large group for whom medical therapy has largely been a failure. These patients have not benefited from the neurohormonal revolution that improved the lives of their brothers and sisters with reduced ejection fractions. Hemodynamic stabilization improves the condition of both but more so of the normal EF cohort. This is an important observation that will help us design future trials for the 50% of heart failure patients with normal systolic function.
Rhode Island congestion management plan : executive summary
DOT National Transportation Integrated Search
1997-09-01
This document provides an overview of the Rhode Island Congestion Management System (CMS) program consisting of the following: Congestion Management System Plan; Incident Management Plan; and Intelligent Transportation System (ITS) Early Deployment P...
Technologies that enable congestion pricing : a primer
DOT National Transportation Integrated Search
2008-10-01
This volume explores transportation technologies that enable congestion pricing. This document considers the following: the functional processes for tolling and congestion pricing; what technologies there are to consider; how the technologies are app...
Better road congestion measures are needed
DOT National Transportation Integrated Search
2003-05-01
Road congestion is growing worse as demand outstrips new roadway construction and other efforts to increase traffic flows. Better ways to measure congestion are needed to effectively address the problem. Actual measures of speeds an...
Income-based equity impacts of congestion pricing
DOT National Transportation Integrated Search
2008-12-01
This equity primer was produced to examine the impacts of congestion pricing on low-income groups, public opinion as expressed by various income groups, and ways to mitigate the equity impacts of congestion pricing
Income-based equity impacts of congestion pricing.
DOT National Transportation Integrated Search
2008-12-01
This equity primer was produced to examine the impacts of congestion pricing on low-income groups, public opinion as expressed by various income groups, and ways to mitigate the equity impacts of congestion pricing.
Autonomous Congestion Control in Delay-Tolerant Networks
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.; Jennings, Esther H.
2005-01-01
Congestion control is an important feature that directly affects network performance. Network congestion may cause loss of data or long delays. Although this problem has been studied extensively in the Internet, the solutions for Internet congestion control do not apply readily to challenged network environments such as Delay Tolerant Networks (DTN) where end-to-end connectivity may not exist continuously and latency can be high. In DTN, end-to-end rate control is not feasible. This calls for congestion control mechanisms where the decisions can be made autonomously with local information only. We use an economic pricing model and propose a rule-based congestion control mechanism where each router can autonomously decide on whether to accept a bundle (data) based on local information such as available storage and the value and risk of accepting the bundle (derived from historical statistics).
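A hedged sketch of the rule-based, local decision is given below: a router accepts a bundle if its expected value outweighs the congestion risk implied by local storage occupancy. The thresholds and the quadratic risk form are hypothetical illustrations, not the paper's model.

# Local bundle-acceptance rule in the spirit of the economic pricing
# model: no end-to-end information, only local state and history.
def accept_bundle(size, value, free_storage, capacity, history_loss_rate):
    """Decide locally whether to accept a bundle."""
    if size > free_storage:
        return False  # hard constraint: no room
    occupancy = 1.0 - (free_storage - size) / capacity
    # Risk grows with occupancy and with historically observed losses.
    risk = occupancy ** 2 + history_loss_rate
    return value > risk

print(accept_bundle(size=10, value=0.9, free_storage=40,
                    capacity=100, history_loss_rate=0.1))  # True
print(accept_bundle(size=10, value=0.3, free_storage=15,
                    capacity=100, history_loss_rate=0.2))  # False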
Ohno, Yukako; Hanawa, Haruo; Jiao, Shuang; Hayashi, Yuka; Yoshida, Kaori; Suzuki, Tomoyasu; Kashimura, Takeshi; Obata, Hiroaki; Tanaka, Komei; Watanabe, Tohru; Minamino, Tohru
2015-01-01
Hepcidin is a key regulator of mammalian iron metabolism and mainly produced by the liver. Hepcidin excess causes iron deficiency and anemia by inhibiting iron absorption from the intestine and iron release from macrophage stores. Anemia is frequently complicated with heart failure. In heart failure patients, the most frequent histologic appearance of liver is congestion. However, it remains unclear whether liver congestion associated with heart failure influences hepcidin production, thereby contributing to anemia and functional iron deficiency. In this study, we investigated this relationship in clinical and basic studies. In clinical studies of consecutive heart failure patients (n = 320), anemia was a common comorbidity (41%). In heart failure patients without active infection and ongoing cancer (n = 30), log-serum hepcidin concentration of patients with liver congestion was higher than those without liver congestion (p = 0.0316). Moreover, in heart failure patients with liver congestion (n = 19), the anemia was associated with the higher serum hepcidin concentrations, which is a type of anemia characterized by induction of hepcidin. Subsequently, we produced a rat model of heart failure with liver congestion by injecting monocrotaline that causes pulmonary hypertension. The monocrotaline-treated rats displayed liver congestion with increase of hepcidin expression at 4 weeks after monocrotaline injection, followed by anemia and functional iron deficiency observed at 5 weeks. We conclude that liver congestion induces hepcidin production, which may result in anemia and functional iron deficiency in some patients with heart failure.
Darling, Aaron E.
2009-01-01
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
A Path Algorithm for Constrained Estimation
Zhou, Hua; Lange, Kenneth
2013-01-01
Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
NASA Astrophysics Data System (ADS)
Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Latent variables are measured using questionnaires that apply an attitude-scale model, yielding data in the form of scores; before analysis, these scores should be transformed into scale data. Path coefficients, the parameter estimates, are calculated from scale data produced by the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient; the transformation method that produces scale data yielding path coefficients (parameter estimates) with smaller variances is therefore said to be better. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. For simulated data with high correlation between items (0.7-0.9), however, the MSI method is 1.3 times more efficient than the SRS method.
Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water
Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong
2017-01-01
This paper presents a joint estimation method of source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of direct (D) path in space domain and the interference characteristic of D and surface-reflected (SR) paths in frequency domain. The former is related to a ray tracing technique to backpropagate the rays and produces an ambiguity surface of source range. The latter utilizes Lloyd’s mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical efficiency and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442
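The D-SR interference exploited for the depth estimate follows from two-path interference; a hedged sketch of the relation, assuming an idealized surface reflection, constant sound speed c, source depth z_s, receiver depth z_r, and horizontal range R (symbols introduced here for illustration), is:

% Two-path (Lloyd's mirror) interference sketch under the stated
% idealizations:
\begin{align*}
\delta &= \sqrt{R^{2}+(z_r+z_s)^{2}} - \sqrt{R^{2}+(z_r-z_s)^{2}}
          && \text{(D--SR path-length difference)}\\
\Delta f &= \frac{c}{\delta}
          && \text{(spacing of interference fringes in frequency)}
\end{align*}
% Measuring \Delta f at known R and z_r therefore constrains z_s,
% which is the source-depth cue exploited on the reliable acoustic path.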
K. Novick; J. Walker; W.S. Chan; A. Schmidt; C. Sobek; J.M. Vose
2013-01-01
A new class of enclosed path gas analyzers suitable for eddy covariance applications combines the advantages of traditional closed-path systems (small density corrections, good performance in poor weather) and open-path systems (good spectral response, low power requirements), and permits estimates of instantaneous gas mixing ratio. Here, the extent to which these...
Communication efficiency and congestion of signal traffic in large-scale brain networks.
Mišić, Bratislav; Sporns, Olaf; McIntosh, Anthony R
2014-01-01
The complex connectivity of the cerebral cortex suggests that inter-regional communication is a primary function. Using computational modeling, we show that anatomical connectivity may be a major determinant for global information flow in brain networks. A macaque brain network was implemented as a communication network in which signal units flowed between grey matter nodes along white matter paths. Compared to degree-matched surrogate networks, information flow on the macaque brain network was characterized by higher loss rates, faster transit times and lower throughput, suggesting that neural connectivity may be optimized for speed rather than fidelity. Much of global communication was mediated by a "rich club" of hub regions: a sub-graph comprised of high-degree nodes that are more densely interconnected with each other than predicted by chance. First, macaque communication patterns most closely resembled those observed for a synthetic rich club network, but were less similar to those seen in a synthetic small world network, suggesting that the former is a more fundamental feature of brain network topology. Second, rich club regions attracted the most signal traffic and likewise, connections between rich club regions carried more traffic than connections between non-rich club regions. Third, a number of rich club regions were significantly under-congested, suggesting that macaque connectivity actively shapes information flow, funneling traffic towards some nodes and away from others. Together, our results indicate a critical role of the rich club of hub nodes in dynamic aspects of global brain communication.
Communication Efficiency and Congestion of Signal Traffic in Large-Scale Brain Networks
Mišić, Bratislav; Sporns, Olaf; McIntosh, Anthony R.
2014-01-01
The complex connectivity of the cerebral cortex suggests that inter-regional communication is a primary function. Using computational modeling, we show that anatomical connectivity may be a major determinant for global information flow in brain networks. A macaque brain network was implemented as a communication network in which signal units flowed between grey matter nodes along white matter paths. Compared to degree-matched surrogate networks, information flow on the macaque brain network was characterized by higher loss rates, faster transit times and lower throughput, suggesting that neural connectivity may be optimized for speed rather than fidelity. Much of global communication was mediated by a “rich club” of hub regions: a sub-graph comprised of high-degree nodes that are more densely interconnected with each other than predicted by chance. First, macaque communication patterns most closely resembled those observed for a synthetic rich club network, but were less similar to those seen in a synthetic small world network, suggesting that the former is a more fundamental feature of brain network topology. Second, rich club regions attracted the most signal traffic and likewise, connections between rich club regions carried more traffic than connections between non-rich club regions. Third, a number of rich club regions were significantly under-congested, suggesting that macaque connectivity actively shapes information flow, funneling traffic towards some nodes and away from others. Together, our results indicate a critical role of the rich club of hub nodes in dynamic aspects of global brain communication. PMID:24415931
Evaluation of Swift Start TCP in Long-Delay Environment
NASA Technical Reports Server (NTRS)
Lawas-Grodek, Frances J.; Tran, Diepchi T.
2004-01-01
This report presents the test results of the Swift Start algorithm in single-flow and multiple-flow testbeds under the effects of high propagation delays, various slow bottlenecks, and small queue sizes. Although this algorithm estimates capacity and implements packet pacing, the findings were that in a heavily congested link, the Swift Start algorithm will not be applicable. The reason is that the bottleneck estimation is falsely influenced by timeouts induced by retransmissions and the expiration of delayed acknowledgment (ACK) timers, thus causing the modified Swift Start code to fall back to regular transmission control protocol (TCP).
Evaluating the risk of industrial espionage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bott, T.F.
1998-12-31
A methodology for estimating the relative probabilities of different compromise paths for protected information by insider and visitor intelligence collectors has been developed based on an event-tree analysis of the intelligence collection operation. The analyst identifies target information and ultimate users who might attempt to gain that information. The analyst then uses an event tree to develop a set of compromise paths. Probability models are developed for each of the compromise paths that use parameters based on expert judgment or historical data on security violations. The resulting probability estimates indicate the relative likelihood of different compromise paths and provide an input for security resource allocation. Application of the methodology is demonstrated using a national security example. A set of compromise paths and probability models specifically addressing this example espionage problem are developed. The probability models for hard-copy information compromise paths are quantified as an illustration of the results, using parametric values representative of historical data available in secure facilities, supplemented where necessary by expert judgment.
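A minimal sketch of the compromise-path probability model follows: each path is a sequence of events with branch probabilities, and the path probability is their product. The events and numbers below are hypothetical stand-ins for the report's espionage example.

# Rank hypothetical compromise paths by event-tree probability.
paths = {
    "insider copies hard-copy document": [
        ("has routine access to document",  0.30),
        ("removes copy past checkpoint",    0.05),
        ("delivers copy to ultimate user",  0.80),
    ],
    "visitor photographs document": [
        ("gains escorted access to area",   0.10),
        ("left unobserved with document",   0.02),
        ("delivers image to ultimate user", 0.90),
    ],
}

def path_probability(events):
    p = 1.0
    for _, prob in events:
        p *= prob
    return p

ranked = sorted(((path_probability(ev), name) for name, ev in paths.items()),
                reverse=True)
for p, name in ranked:   # relative likelihoods guide resource allocation
    print(f"{p:.4f}  {name}")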
Alternative performance measures for evaluating congestion.
DOT National Transportation Integrated Search
2004-04-01
This report summarizes the results of the work performed under the project Alternative Performance Measures for Evaluating Congestion. The study first outlines existing approaches to looking at congestion. It then builds on the previous work in the...
Technologies that complement congestion pricing : a primer.
DOT National Transportation Integrated Search
2008-10-01
The purpose of this volume is to consider the technology options that are available to complement congestion-pricing approaches. This primer explores how technology broadens the success for congestion pricing by supporting the traveler's decision to ...
Multimodal and corridor applications of travel time reliability.
DOT National Transportation Integrated Search
2012-03-30
Congestion is all too familiar in Florida's cities. Traditionally, agencies have tried to mitigate recurring congestion by comparing demand and capacity during peak periods and alleviating bottlenecks. However, congestion is often due to nonrecurri...
Multimodal and corridor applications of travel time reliability : [summary].
DOT National Transportation Integrated Search
2012-01-01
Congestion is all too familiar in Florida's cities. Traditionally, agencies have tried to mitigate recurring congestion by comparing demand and capacity during peak periods and alleviating bottlenecks. However, congestion is often due to nonrecurri...
Statewide GIS mapping of recurring congestion corridors : final report.
DOT National Transportation Integrated Search
2009-07-01
Recurring congestion occurs when travel demand reaches or exceeds the available roadway capacity. This project developed an interactive geographic information system (GIS) map of the recurring congestion corridors (labeled herein as hotspots) in ...
Congestion pricing : a primer : overview
DOT National Transportation Integrated Search
2008-10-01
This Overview primer was produced to explain the concept of congestion pricing and its benefits, to present examples of congestion-pricing approaches implemented in the United States and abroad, and to briefly discuss federal-aid policy and programs ...
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until it was deemed aligned with their true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path information helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
An improved multi-paths optimization method for video stabilization
NASA Astrophysics Data System (ADS)
Qin, Tao; Zhong, Sheng
2018-03-01
For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one while keeping the cropping ratio and warping ratio of each frame in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is used to weight the motion of adjacent cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.
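A stripped-down stand-in for the Gaussian-weighted smoothing step is shown below: each frame's smoothed position is a Gaussian-weighted average of its temporal neighbors. The full method optimizes one path per grid cell and constrains cropping and warping, which this one-dimensional sketch omits.

import math

def gaussian_smooth(path, sigma=3.0, radius=9):
    # Gaussian-weighted moving average over a window of +/- radius frames.
    smoothed = []
    for t in range(len(path)):
        num = den = 0.0
        for k in range(max(0, t - radius), min(len(path), t + radius + 1)):
            w = math.exp(-((k - t) ** 2) / (2 * sigma ** 2))
            num += w * path[k]
            den += w
        smoothed.append(num / den)
    return smoothed

jittery = [i + ((-1) ** i) * 2.0 for i in range(30)]   # synthetic shaky 1-D path
print([round(x, 2) for x in gaussian_smooth(jittery)[:8]])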
Local empathy provides global minimization of congestion in communication networks
NASA Astrophysics Data System (ADS)
Meloni, Sandro; Gómez-Gardeñes, Jesús
2010-11-01
We present a mechanism to avoid congestion in complex networks based on local knowledge of traffic conditions and the ability of routers to self-coordinate their dynamical behavior. In particular, routers make use of local information about traffic conditions to either reject or accept information packets from their neighbors. We show that when nodes are only aware of their own congestion state they self-organize into a hierarchical configuration that remarkably delays the onset of congestion, although it leads to a sharp, first-order-like congestion transition. We also consider the case when nodes are aware of the congestion state of their neighbors. In this case, we show that empathy between nodes is strongly beneficial to the overall performance of the system, and it is possible to achieve larger values of the critical load together with a smooth, second-order-like transition. Finally, we show that local empathy minimizes the impact of congestion as much as global minimization does. We thus present an outstanding example of how local dynamical rules can optimize the system’s functioning up to the levels reached using global knowledge.
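A toy version of the accept/reject rule is sketched below: a router accepts a packet from a neighbor with a probability that falls as its own queue fills and, in the "empathetic" variant, as its neighbors' queues fill. The linear functional form and the empathy weight are assumptions for illustration, not the rule analyzed in the paper.

import random

def accept_probability(own_queue, neighbor_queues, capacity=20, empathy=0.5):
    # Blend the node's own load with its neighbors' average load; empathy=0
    # recovers the "own state only" variant discussed above.
    own_load = own_queue / capacity
    nbr_load = sum(neighbor_queues) / (len(neighbor_queues) * capacity)
    load = (1 - empathy) * own_load + empathy * nbr_load
    return max(0.0, 1.0 - load)

rng = random.Random(1)
p = accept_probability(own_queue=12, neighbor_queues=[18, 15, 9])
print(f"accept with p = {p:.2f}:", rng.random() < p)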
Fixture congestion modulates post-match recovery kinetics in professional soccer players.
Lundberg, Tommy R; Weckström, Kristoffer
2017-01-01
This study examined the influence of fixture congestion on physical performance and biochemical variables in professional male footballers. After 3 competitive matches within a week (3M cycle), 16 players underwent blood sampling and field testing 72 h after the last match. The same tests were performed after a regular 1-match week cycle (1M cycle). The 1M vs. 3M change scores were compared between Congested (high match exposure) and non-selected Control players. The change score in muscle soreness was greater (effect size 1.0; CI 0.0-1.9) in the Congested players than in Controls, indicating a possible negative effect of fixture congestion. There were no effects on sprint and jump performance. The change in plasma (P)-Urea was greater in Congested players than in Controls (effect size 1.3; CI 0.3-2.2). The effects on other blood variables were either non-existent/trivial or unclear. Altogether, physical fitness and immune function were not compromised by match congestion, yet some indices of physiological stress and muscle damage were still evident.
MDTM: Optimizing Data Transfer using Multicore-Aware I/O Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Liang; Demar, Phil; Wu, Wenji
2017-05-09
Bulk data transfer is facing significant challenges in the coming era of big data. There are multiple performance bottlenecks along the end-to-end path from the source to the destination storage system. The limitations of current-generation data transfer tools themselves can have a significant impact on end-to-end data transfer rates. In this paper, we identify the issues that lead to underperformance of these tools, and present a new data transfer tool with an innovative I/O scheduler called MDTM. The MDTM scheduler exploits underlying multicore layouts to optimize throughput by reducing delay and contention for I/O reading and writing operations. With our evaluations, we show how MDTM successfully avoids NUMA-based congestion and significantly improves end-to-end data transfer rates across high-speed wide area networks.
MDTM: Optimizing Data Transfer using Multicore-Aware I/O Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Liang; Demar, Phil; Wu, Wenji
2017-01-01
Bulk data transfer is facing significant challenges in the coming era of big data. There are multiple performance bottlenecks along the end-to-end path from the source to the destination storage system. The limitations of current-generation data transfer tools themselves can have a significant impact on end-to-end data transfer rates. In this paper, we identify the issues that lead to underperformance of these tools, and present a new data transfer tool with an innovative I/O scheduler called MDTM. The MDTM scheduler exploits underlying multicore layouts to optimize throughput by reducing delay and contention for I/O reading and writing operations. With our evaluations, we show how MDTM successfully avoids NUMA-based congestion and significantly improves end-to-end data transfer rates across high-speed wide area networks.
Jeffery A. Michael; Stephen Reiling; Hsiang-Tai Cheng
1995-01-01
A dichotomous choice contingent valuation model is estimated using an on-site survey of Caribou-Speckled Mountain Wilderness visitors. The results indicate that pre-trip expectations of various trip attributes such as congestion have a greater impact on a trip's value than the actual conditions of the visit. This research also shows that future economic studies of an...
Atlanta congestion reduction demonstration : national evaluation report.
DOT National Transportation Integrated Search
2014-03-01
This document presents the final report on the national evaluation of the Atlanta Congestion Reduction Demonstration (CRD) under the United States Department of Transportation (U.S. DOT) CRD Program. The Atlanta CRD projects focus on reducing congest...
Modeling the effect of accessibility and congestion in location choice.
DOT National Transportation Integrated Search
2012-12-01
This study explores the relationship between accessibility and congestion, and their impacts on property values. Three research questions are addressed: (1) What is the relation between accessibility and congestion both regional and neighborhood leve...
Speed harmonization and peak-period shoulder use to manage urban freeway congestion.
DOT National Transportation Integrated Search
2009-10-01
Traffic congestion is an increasing problem in the nation's urban areas, leading to personal inconvenience, increased pollution, hampered economic productivity, and reduced quality of life. While traffic congestion tends to continuously increas...
A toolbox for alleviating traffic congestion and enhancing mobility
DOT National Transportation Integrated Search
2012-06-01
This report presents the Tolling Data Test Plan for the national evaluation of the Los Angeles (LA) Congestion Reduction Demonstration (Metro ExpressLanes) under the United States Department of Transportation (U.S. DOT) Congestion Reduction Demonstra...
Determining queue and congestion in highway work zone bottlenecks.
DOT National Transportation Integrated Search
2011-03-11
Construction zones, though required for infrastructure maintenance, have become congestion choke points on most highway systems in the US. The congestion may create potentially unsafe driving conditions for approaching motorists that do not expect qu...
Determining Queue and Congestion in Highway Work Zone Bottlenecks
DOT National Transportation Integrated Search
2011-03-11
Construction zones, though required for infrastructure maintenance, have become congestion choke points on most highway systems in the US. The congestion may create potentially unsafe driving conditions for approaching motorists that do not expect qu...
Stern, Theodore A.; Hebert, Kathy A.; Musselman, Dominique L.
2013-01-01
Context: Major depressive disorder (MDD) can be challenging to diagnose in patients with congestive heart failure, who often suffer from fatigue, insomnia, weight changes, and other neurovegetative symptoms that overlap with those of depression. Pathophysiologic mechanisms (eg, inflammation, autonomic nervous system dysfunction, cardiac arrhythmias, and altered platelet function) connect depression and congestive heart failure. Objective: We sought to review the prevalence, diagnosis, neurobiology, and treatment of depression associated with congestive heart failure. Data Sources: A search of all English-language articles between January 2003 and January 2013 was conducted using the search terms congestive heart failure and depression. Study Selection: We found 1,498 article abstracts and 19 articles (meta-analyses, systematic reviews, and original research articles) that were selected for inclusion, as they contained information about our focus on diagnosis, treatment, and pathophysiology of depression associated with congestive heart failure. The search was augmented with manual review of reference lists of articles from the initial search. Articles selected for review were determined by author consensus. Data Extraction: The prevalence, diagnosis, neurobiology, and treatment of depression associated with congestive heart failure were reviewed. Particular attention was paid to the safety, efficacy, and tolerability of antidepressant medications commonly used to treat depression and how their side-effect profiles impact the pathophysiology of congestive heart failure. Drug-drug interactions between antidepressant medications and medications used to treat congestive heart failure were examined. Results: MDD is highly prevalent in patients with congestive heart failure. Moreover, the prevalence and severity of depression correlate with the degree of cardiac dysfunction and development of congestive heart failure. Depression increases the risk of congestive heart failure, particularly in those patients with coronary artery disease, and is associated with a poorer quality of life, increased use of health care resources, more frequent adverse clinical events and hospitalizations, and twice the risk of mortality. Conclusions: At present, limited empirical data exist with regard to treatment of depression in the increasingly large population of patients with congestive heart failure. Evidence reveals that both psychotherapeutic treatment (eg, cognitive-behavioral therapy) and pharmacologic treatment (eg, use of the selective serotonin reuptake inhibitor sertraline) are safe and effective in reducing depression severity in patients with cardiovascular disease. Collaborative care programs featuring interventions that work to improve adherence to medical and psychiatric treatments improve both cardiovascular disease and depression outcomes. Depression rating scales such as the 9-item Patient Health Questionnaire should be used to monitor therapeutic efficacy. PMID:24392265
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Wang, Lan; Kim, Yongdai; Li, Runze
2013-10-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
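As a rough illustration of the selection step, the sketch below scores models along a solution path with a BIC-type criterion. The particular form used, log(RSS/n) + |S| * log(log n) * log(p) / n, is one common high-dimensional variant and stands in for the paper's exact definition, which may differ in its constants; models along the path are assumed given as (lambda, active-set size, residual sum of squares) triples.

import math

def hbic(rss, n_active, n, p):
    # One common high-dimensional BIC variant (an assumption of this sketch).
    return math.log(rss / n) + n_active * math.log(math.log(n)) * math.log(p) / n

def select_model(path_models, n, p):
    # path_models: list of (lambda, active_set_size, rss) along the solution path.
    return min(path_models, key=lambda m: hbic(m[2], m[1], n, p))

path = [(1.0, 2, 540.0), (0.5, 5, 310.0), (0.2, 9, 295.0), (0.05, 40, 250.0)]
lam, k, rss = select_model(path, n=200, p=5000)
print(f"selected lambda={lam} with {k} active covariates")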
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION
Wang, Lan; Kim, Yongdai; Li, Runze
2014-01-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843
Voga, Gorazd
2008-01-01
The measurement of pulmonary artery occlusion pressure (PAOP) is important for estimating left ventricular filling pressure and for distinguishing between cardiac and non-cardiac etiologies of pulmonary edema. Clinical assessment of PAOP, which relies on physical signs of pulmonary congestion, is uncertain. Reliable PAOP measurement can be performed with a pulmonary artery catheter, but it is also possible using echocardiography. Several Doppler variables show acceptable correlation with PAOP and can be used for its estimation in cardiac and critically ill patients. Noninvasive PAOP estimation should probably become an integral part of transthoracic and transesophageal echocardiographic evaluation in critically ill patients. However, the limitations of both methods should be taken into consideration, and in specific patients invasive PAOP measurement is still unavoidable if the exact value of PAOP is needed.
Estimation of the interference coupling into cables within electrically large multiroom structures
NASA Astrophysics Data System (ADS)
Keghie, J.; Kanyou Nana, R.; Schetelig, B.; Potthast, S.; Dickmann, S.
2010-10-01
Communication cables are used to transfer data between components of a system. As part of the EMC analysis of complex systems, it is necessary to determine which level of interference can be expected at the input of connected devices due to coupling into the irradiated cable. For electrically large systems consisting of several rooms, with cables connecting components located in different rooms, an estimation of the coupled disturbances inside cables using commercial field computation software is often not feasible without several restrictions. In many cases, this is due to the computing memory and processing power needed for the computation. In this paper, we show that, starting from a topological analysis of the entire system, weak coupling paths within the system can be identified. By neglecting these coupling paths and using the transmission line approach, the original system is simplified so that a simpler estimation is possible. Using the example of a system composed of two rooms, multiple apertures, and a network cable located in both chambers, it is shown that an estimation of the coupled disturbances due to external electromagnetic sources is feasible with this approach. Starting from an incident electromagnetic field, we determine transfer functions describing the coupling means (apertures, cables). Using these transfer functions and the knowledge of the weak coupling paths above, a decision is made as to which paths can be neglected during the estimation. The estimation of the coupling into the cable is then made taking only paths with strong coupling into account. The remaining part of the wiring harness in areas with weak coupling is represented by its input impedance. A comparison with the original network shows good agreement.
Forecasting the clearance time of freeway accidents
DOT National Transportation Integrated Search
2002-01-01
Freeway congestion is a major and costly problem in many U.S. metropolitan areas. From a traveler's perspective, congestion has costs in terms of longer travel times and lost productivity. From the traffic manager's perspective, congestion causes a f...
The economic cost of traffic congestion in Florida.
DOT National Transportation Integrated Search
2010-08-01
Traffic congestion in the U.S. is bad and getting worse, and it is expensive. Appropriate solutions to this problem require appropriate information. A comprehensive and accurate analysis of congestion costs is a critical tool for planning and impleme...
Air quality assessment at a congested urban intersection
DOT National Transportation Integrated Search
2000-09-01
The deficient transportation system of Beirut results in significant economic losses for the city and causes severe traffic congestion in the urban areas. Proposals have been made for grade separations at some of the worst congested intersections. Th...
Impact of congestion on bus operations and costs.
DOT National Transportation Integrated Search
2003-10-01
Traffic congestion in Northern New Jersey imposes substantial operational and monetary penalty on bus service. The purpose of this project was to quantify the additional time and costs due to traffic congestion. A regression model was developed that ...
Mobility and the Costs of Congestion in New Jersey
DOT National Transportation Integrated Search
2000-02-01
This study measured quantifiable and qualitative impacts of congestion in New Jersey on mobility, the cost of transportation, and economic productivity. It addressed the impacts of congestion on both an individual level (impacts on an average travele...
Measuring non-recurrent congestion in small to medium sized urban areas.
DOT National Transportation Integrated Search
2013-05-01
Understanding the relative magnitudes of recurrent vs. non-recurrent congestion in an urban area is critical to the selection of proper countermeasures and the appropriate allocation of resources to address congestion problems. Small to medium sized ...
Wagenheim, Gavin N; Au, Jason; Gargollo, Patricio C
2016-01-01
Many postoperative complications have been reported after repair of classic bladder exstrophy. We present a case of medicinal leech therapy for glans penis congestion following exstrophy repair in an infant. A 2-week-old male with classic bladder exstrophy underwent complete primary repair. On postoperative day 1, he developed rapidly worsening glans penis venous congestion. Medicinal leech therapy was instituted with antibiotics and blood transfusions to maintain a hematocrit >30%. After 24 hours, venous congestion improved and therapy was discontinued. The patient's remaining hospital course was uncomplicated. Medicinal leeches are an effective therapy to relieve glans penis venous congestion.
NASA Technical Reports Server (NTRS)
Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.
1988-01-01
Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used to provide an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of GPS geodetic baseline measurements in the Gulf of California, in which high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration with stochastic estimation of residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.
Phase diagram of congested traffic flow: An empirical study
Lee; Lee; Kim
2000-10-01
We analyze traffic data from a highway section containing one effective on-ramp. Based on two criteria, local velocity variation patterns and expansion (or nonexpansion) of congested regions, three distinct congested traffic states are identified. These states appear at different levels of the upstream flux and the on-ramp flux, thereby generating a phase diagram of the congested traffic flow. Observed traffic states are compared with recent theoretical analyses and both agreeing and disagreeing features are found.
[Use of antihypertensive drug therapy and risk of development of congestive heart failure].
Sobrino, Javier; Plana, Jaume; Felip, Angela; Doménech, Mónica; Reth, Peter; Adrián, María Jesús; de la Sierra, Alejandro
2004-09-18
It has been suggested that the use of some antihypertensive agents may favour the development of congestive heart failure. The aim of the present study was to evaluate such a possible association in patients with a new diagnosis of congestive heart failure. This was a retrospective case-control study of 81 patients who had a first hospital admission with a new diagnosis of congestive heart failure (cases) and 162 patients admitted for other hypertensive complications (controls). Previous antihypertensive drug use was registered and the possible association with congestive heart failure was evaluated. The presence of congestive heart failure was not associated with the use of any antihypertensive drug class. When treatments were grouped into classic (diuretics and beta-blockers) or modern (calcium channel blockers, angiotensin-converting-enzyme inhibitors, alpha-blockers or angiotensin receptor blockers) agents, a negative association was observed with the latter group, which was observed in 48.1% of cases and 63.6% of controls (odds ratio: 0.532; 95% confidence interval, 0.310-0.913). This association was lost after adjustment for other cardiovascular risk factors or previous hypertensive complications. The development of congestive heart failure was not associated with the use of any specific antihypertensive drug class. From the present evidence, it is not possible to recommend a specific antihypertensive agent in patients at risk of developing congestive heart failure but without evidence of such disease.
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
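The ridge idea at the heart of the method can be illustrated in a few lines: path coefficients come from solving a system built on correlations among latent variables, and adding lambda*I stabilizes the solve when those correlations are nearly collinear. The numbers below are invented; this is a sketch of the ridge principle, not the PLSc estimator itself.

import numpy as np

R_xx = np.array([[1.00, 0.95],      # two highly collinear latent predictors
                 [0.95, 1.00]])
r_xy = np.array([0.60, 0.58])       # correlations with the outcome

for lam in (0.0, 0.1):
    # Ridge-regularized normal equations: (R_xx + lambda*I) beta = r_xy.
    beta = np.linalg.solve(R_xx + lam * np.eye(2), r_xy)
    print(f"lambda={lam}: path coefficients {np.round(beta, 3)}")

With lambda = 0 the near-singular correlation matrix inflates and destabilizes the coefficients; a small ridge term pulls them toward more balanced, lower-variance values.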
FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR
The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...
The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...
Omar, Hesham R; Charnigo, Richard; Guglin, Maya
2017-04-01
Congestion is the main contributor to heart failure (HF) morbidity and mortality. We assessed the combined role of congestion and decreased forward flow in predicting morbidity and mortality in acute systolic HF. The Evaluation Study of Congestive Heart Failure and Pulmonary Artery Catheterization Effectiveness trial data set was used to determine if the ratio of simultaneously measured systolic blood pressure (SBP)/right atrial pressure (RAP) on admission predicted HF rehospitalization and 6-month mortality. One hundred ninety-five patients (mean age 56.5 years, 75% men) who received pulmonary artery catheterization were studied. The RAP, SBP, and SBP/RAP had an area under the curve (AUC) of 0.593 (p = 0.0205), 0.585 (p = 0.0359), and 0.621 (p = 0.0026), respectively, in predicting HF rehospitalization. The SBP/RAP was a superior marker of HF rehospitalization compared with RAP alone (difference in AUC 0.0289, p = 0.0385). The optimal criterion of SBP/RAP <11 provided the highest combined sensitivity (77.1%) and specificity (50.9%) in predicting HF rehospitalization. The SBP/RAP had an AUC 0.622, p = 0.0108, and a cut-off value of SBP/RAP <8 had a sensitivity of 61.9% and specificity 64.1% in predicting mortality. Multivariate analysis showed that an SBP/RAP <11 independently predicted rehospitalization for HF (estimated odds ratio 3.318, 95% confidence interval 1.692 to 6.506, p = 0.0005) and an SBP/RAP <8 independently predicted mortality (estimated hazard ratio 2.025, 95% confidence interval 1.069 to 3.833, p = 0.030). In conclusion, the SBP/RAP ratio is a marker that identifies a spectrum of complications after hospitalization of patients with decompensated systolic HF, starting with increased incidence of HF rehospitalization at SBP/RAP <11 to increased mortality with SBP/RAP <8.
Development of congestion performance measures using ITS information.
DOT National Transportation Integrated Search
2003-01-01
The objectives of this study were to define a performance measure(s) that could be used to show congestion levels on critical corridors throughout Virginia and to develop a method to select and calculate performance measures to quantify congestion in...
Urban roadway congestion : annual report
DOT National Transportation Integrated Search
1998-01-01
The annual traffic congestion study is an effort to monitor roadway congestion in major urban areas in the United States. The comparisons to other areas and to previous experiences in each area are facilitated by a database that begins in 1982 and in...
A simulation-optimization-based decision support tool for mitigating traffic congestion.
DOT National Transportation Integrated Search
2009-12-01
"Traffic congestion has grown considerably in the United States over the past twenty years. In this paper, we develop : a robust decision support tool based on simulation optimization to evaluate and recommend congestion-mitigation : strategies to tr...
Study on Earthquake Emergency Evacuation Drill Trainer Development
NASA Astrophysics Data System (ADS)
ChangJiang, L.
2016-12-01
With the improvement of China's urbanization, ensuring that people survive an earthquake requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we built simulation software for earthquake emergency evacuation drills. The software can simulate a drill by building a spatial structural model and selecting people's location information according to the actual conditions of the buildings. Based on the simulation data, a drill can then be run in the same building. RFID technology can be used for drill data collection: it reads personal information and sends it to the evacuation simulation software via Wi-Fi. The simulation software then compares the simulated data with the actual evacuation process, including evacuation time, evacuation paths, congestion nodes and so on. Finally, it produces a comparative analysis report with assessment results and an optimization proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process management concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable and scientific, increasing the coping capacity of cities against hazards.
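One ingredient of such a trainer, shortest-path route planning on a gridded floor plan, is easy to sketch; the breadth-first search below finds an evacuation route around blocked cells. The cellular-automaton crowd dynamics and collision avoidance of the full model are omitted, and the floor plan is invented.

from collections import deque

def shortest_route(grid, start, exit_cell):
    # Breadth-first search over a grid: 0 = passable corridor, 1 = wall.
    rows, cols = len(grid), len(grid[0])
    prev, q = {start: None}, deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == exit_cell:            # reached the exit: rebuild the path
            path, node = [], (r, c)
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None                            # no evacuation route exists

grid = [[0, 0, 0, 0],                      # invented floor plan
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(shortest_route(grid, start=(0, 0), exit_cell=(2, 3)))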
NASA Astrophysics Data System (ADS)
Rahardjo, K. D.; Dharmaeizar; Nainggolan, G.; Harimurti, K.
2017-08-01
Research has shown that hemodialysis patients with lung congestion have high morbidity and mortality. Patients were assumed to be free of lung congestion if they had reached their post-dialysis dry weight. Most often, physical examination was used to determine whether a patient was free of lung congestion; however, the accuracy of physical examination in detecting lung congestion has not been established. To compare the capabilities of physical examination and lung ultrasound in detecting lung congestion, cross-sectional data collection was conducted on hemodialysis patients. Analysis was done to obtain the proportion, sensitivity, specificity, positive predictive value, negative predictive value, and positive likelihood ratio. Sixty patients participated in this study. The inter-observer variation for 20 patients revealed a kappa value of 0.828. When all 60 patients were taken into account, we found that 36 patients (57.1%) had lung congestion. Mild lung congestion was found in 24 (38.1%), and 12 (19%) had a moderate degree of congestion. In the analysis comparing jugular venous pressure to lung ultrasound, we found that sensitivity was 0.47 (0.31-0.63), specificity was 0.73 (0.54-0.86), positive predictive value (PPV) was 0.51 (0.36-0.67), negative predictive value (NPV) was 0.70 (0.49-0.84), positive likelihood ratio (PLR) was 1.75 (0.88-3.47), and the negative likelihood ratio (NLR) was 0.72 (0.47-1.12). For lung auscultation, we found that sensitivity was 0.56 (0.39-0.71), specificity was 0.54 (0.35-0.71), PPV was 0.61 (0.44-0.76), NPV was 0.48 (0.31-0.66), PLR was 1.21 (0.73-2.0), and NLR was 0.82 (0.49-1.38). The results of our study showed that jugular venous distention and lung auscultation examination are not reliable means of detecting lung congestion.
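All of the reported quantities derive from a 2x2 table of the index test (physical examination) against the reference standard (lung ultrasound). The sketch below computes them from illustrative counts, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard accuracy measures from a 2x2 confusion table.
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

# Invented counts for illustration only.
for name, value in diagnostic_metrics(tp=17, fp=11, fn=19, tn=13).items():
    print(f"{name}: {value:.2f}")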
The congestion control algorithm based on queue management of each node in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Wei, Yifei; Chang, Lin; Wang, Yali; Wang, Gaoping
2016-12-01
This paper proposes an active queue management mechanism that considers a node's own ability and its importance in the network when setting its queue thresholds. As the network load increases, local congestion in a mobile ad hoc network may degrade network performance and increase the energy consumption of hot nodes, even to the point of failure. If a low-energy node becomes congested by forwarding data packets, heavy packet loss will follow when that node later acts as a source. By adjusting the upper and lower threshold limits, the mechanism controls the probability that a node's buffer queue falls into each congestion region, so nodes can adjust their share of the packet-forwarding load according to their own situation. The proposed algorithm slows the sending rate hop by hop along the transmission direction, from the congested node back toward the source, to prevent further congestion at its origin. The simulation results show that the algorithm makes better use of the forwarding capacity of strong nodes, protects weak nodes, and effectively alleviates network congestion.
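A schematic of the threshold rule might look as follows: each node derives lower and upper queue thresholds from its own ability (for example, residual energy) and importance, accepts everything below the lower threshold, rejects everything above the upper one, and ramps the drop probability linearly in between. The threshold formulas here are assumptions for illustration, not the paper's exact expressions.

import random

def thresholds(buffer_size, energy_frac, importance):
    # Assumed formula: stronger, more important nodes keep more usable buffer.
    upper = buffer_size * (0.5 + 0.4 * energy_frac * importance)
    lower = 0.5 * upper
    return lower, upper

def drop_probability(queue_len, lower, upper):
    if queue_len < lower:
        return 0.0                 # no congestion: accept everything
    if queue_len >= upper:
        return 1.0                 # severe congestion: reject everything
    return (queue_len - lower) / (upper - lower)   # linear ramp in between

lo, up = thresholds(buffer_size=50, energy_frac=0.8, importance=0.9)
p = drop_probability(queue_len=30, lower=lo, upper=up)
print(f"thresholds=({lo:.1f}, {up:.1f}), drop with p={p:.2f}:", random.random() < p)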
Niimi, Yosuke; Mori, Satoko; Takeuchi, Masaki
2017-01-01
Negative pressure wound therapy (NPWT) is a method for treating wounds. However, there are no case reports using NPWT to treat congestion after an arterialized venous flap. This study therefore reports favorable outcomes after using a single-use NPWT system to manage congestion. A 39-year-old man had his index finger caught in a press machine. The finger had a soft tissue defect at the ventral part. An arterialized venous flap taken from the right forearm was transplanted. Perfusion of the flap was favorable, but on postoperative day 5, congestion and edema of the flap were found. NPWT was then initiated. The congestion and edema in the flap improved without complications such as flap necrosis and wound infection. At 4 months postoperatively, the morphology of the finger was favorable. In this study, NPWT is speculated to force the deeper blood vessels within the flap to dilate, inducing drainage while simultaneously reducing excess blood flow to the cortical layer, resulting in the improvement of congestion. Negative pressure wound therapy was used to treat congestion after transplantation of an arterialized venous flap, and the wound was favorably managed. PMID:29270041
Seasonal variation in traffic congestion : a study of three U.S. cities
DOT National Transportation Integrated Search
2008-08-01
Drivers may notice seasonal changes in patterns of highway congestion in many urban areas of the country. These patterns, however, can be very different for individual cities. This report looks at congestion patterns over a 3-year period ...
Projecting future landside congestion delays at the BWI Airport : final report.
DOT National Transportation Integrated Search
2003-10-01
This study examines access to the BWI Airport in detail. Complex density functions are utilized to determine the congestion curves faced by passengers arriving at the airport. It is found that all lots have congestion during peak periods and ...
DOT National Transportation Integrated Search
2002-04-20
The Long Island Transportation Plan to Manage Congestion (LITP 2000) will establish an integrated, multi-modal transportation program of cost-effective strategies to manage congestion and improve the movement of people and goods in Nassau and Suffolk...
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.
1980-01-01
Ten segments of size 20 x 10 km were aerially photographed and used as training areas for automatic classifications. The study area was covered by four LANDSAT paths: 235, 236, 237, and 238. The percentages of overall correct classification for these paths range from 79.56 percent for path 238 to 95.59 percent for path 237.
Rouseff, Daniel; Badiey, Mohsen; Song, Aijun
2009-11-01
The performance of a communications equalizer is quantified in terms of the number of acoustic paths that are treated as usable signal. The analysis uses acoustical and oceanographic data collected off the Hawaiian Island of Kauai. Communication signals were measured on an eight-element vertical array at two different ranges, 1 and 2 km, and processed using an equalizer based on passive time-reversal signal processing. By estimating the Rayleigh parameter, it is shown that all paths reflected by the sea surface at both ranges undergo incoherent scattering. It is demonstrated that some of these incoherently scattered paths are still useful for coherent communications. At a range of 1 km, optimal communications performance is achieved when six acoustic paths are retained and all paths with more than one reflection off the sea surface are rejected. Consistent with a model that ignores loss from near-surface bubbles, the performance improves by approximately 1.8 dB when the number of retained paths is increased from four to six. The four-path results, though, are more stable and require less frequent channel estimation. At a range of 2 km, ray refraction is observed and communications performance is optimal when some paths with two sea-surface reflections are retained.
Clarke, Malcolm; Shah, Anila; Sharma, Urvashi
2011-01-01
We conducted a systematic review of large, well-conducted randomised trials designed to evaluate the effectiveness of telemonitoring on patients with congestive heart failure (CHF). Two people reviewed 125 articles independently and selected 13 articles for final review. These studies concerned 3480 patients. The follow-up period of the studies was 3-15 months. Pooled estimate results showed that there was an overall reduction in all-cause mortality (P = 0.02). There was no overall reduction in all-cause hospital admission (P = 0.84), although there was a reduction in CHF hospital admission (P = 0.0004). There was no reduction in all-cause emergency admission (P = 0.67). There was no significant difference in length of stay in hospital, medication adherence or cost. Telemonitoring in conjunction with nurse home visiting and specialist unit support can be effective in the clinical management of patients with CHF and help to improve their quality of life.
For what illnesses is a disease management program most effective?
Jutkowitz, Eric; Nyman, John A; Michaud, Tzeyu L; Abraham, Jean M; Dowd, Bryan
2015-02-01
We examined the impact of a disease management (DM) program offered at the University of Minnesota for those with various chronic diseases. Differences-in-differences regression equations were estimated to determine the effect of DM participation by chronic condition on expenditures, absenteeism, hospitalizations, and avoidable hospitalizations. Disease management reduced health care expenditures for individuals with asthma, cardiovascular disease, congestive heart failure, depression, musculoskeletal problems, low back pain, and migraines. Disease management reduced hospitalizations for those same conditions except for congestive heart failure and reduced avoidable hospitalizations for individuals with asthma, depression, and low back pain. Disease management did not have any effect for individuals with diabetes, arthritis, or osteoporosis, nor did DM have any effect on absenteeism. Employers should focus on those conditions that generate savings when purchasing DM programs. This study suggests that the University of Minnesota's DM program reduces hospitalizations for individuals with asthma, cardiovascular disease, depression, musculoskeletal problems, low back pain, and migraines. The program also reduced avoidable hospitalizations for individuals with asthma, depression, and low back pain.
Quick Vegas: Improving Performance of TCP Vegas for High Bandwidth-Delay Product Networks
NASA Astrophysics Data System (ADS)
Chan, Yi-Cheng; Lin, Chia-Liang; Ho, Cheng-Yuan
An important issue in designing a TCP congestion control algorithm is that it should allow the protocol to quickly adjust the end-to-end communication rate to the bandwidth on the bottleneck link. However, the TCP congestion control may function poorly in high bandwidth-delay product networks because of its slow response with large congestion windows. In this paper, we propose an enhanced version of TCP Vegas called Quick Vegas, in which we present an efficient congestion window control algorithm for a TCP source. Our algorithm improves the slow-start and congestion avoidance techniques of original Vegas. Simulation results show that Quick Vegas significantly improves the performance of connections as well as remaining fair when the bandwidth-delay product increases.
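For reference, the Vegas-style update that Quick Vegas builds on compares expected and actual throughput and nudges the window to keep an estimated backlog between alpha and beta packets queued in the network. The sketch below shows that baseline rule; Quick Vegas replaces the one-packet adjustments with larger, history-based ones that are not reproduced here.

def vegas_update(cwnd, base_rtt, rtt, alpha=1, beta=3):
    expected = cwnd / base_rtt                 # throughput if no queueing
    actual = cwnd / rtt                        # measured throughput
    backlog = (expected - actual) * base_rtt   # estimated packets queued in network
    if backlog < alpha:
        return cwnd + 1                        # too little backlog: speed up
    if backlog > beta:
        return cwnd - 1                        # too much backlog: slow down
    return cwnd

cwnd = 32.0
for rtt in (0.102, 0.110, 0.150):              # base RTT 100 ms, rising queueing
    cwnd = vegas_update(cwnd, base_rtt=0.100, rtt=rtt)
    print(f"rtt={rtt * 1000:.0f} ms -> cwnd={cwnd}")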
Fair and efficient network congestion control based on minority game
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang
2011-12-01
Low link utilization, RTT unfairness, and unfairness in multi-bottleneck networks are problems common to most present network congestion control algorithms. Through an analogy between network congestion control and the "El Farol Bar" problem, we establish a congestion control model based on the minority game (MG), and then present a novel network congestion control algorithm based on the model. Simulation results indicate that the proposed algorithm can drive link utilization close to 100% with zero packet loss and small queue sizes. Besides, the RTT unfairness and the unfairness of multi-bottleneck networks can be resolved, achieving max-min fairness in multi-bottleneck networks while efficiently damping the "ping-pong" oscillation caused by global synchronization.
Traffic jams induced by fluctuation of a leading car.
Nagatani, T
2000-04-01
We present a phase diagram of the different kinds of congested traffic triggered by the fluctuation of a leading car in an open system without sources and sinks. Traffic states and density waves are investigated numerically, varying the amplitude of the fluctuation in a car-following model. Phase transitions among free traffic, oscillatory congested traffic, and homogeneous congested traffic occur as the leading car fluctuates. As the amplitude of the fluctuation increases, the transition between free traffic and oscillatory traffic occurs at lower density, and the transition between homogeneous congested traffic and oscillatory traffic occurs at higher density. The oscillatory congested traffic corresponds to the coexisting phase. Also, moving localized clusters appear just above the transition lines.
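A compact example of the kind of car-following simulation involved is given below: followers relax toward an optimal velocity determined by headway while the leader's speed fluctuates randomly. The optimal velocity function and parameter values follow common choices for such models and are illustrative, not the paper's calibration.

import math
import random

def optimal_velocity(headway, vmax=2.0):
    # Common sigmoidal optimal-velocity function (an assumed form).
    return (vmax / 2) * (math.tanh(headway - 2.0) + math.tanh(2.0))

def simulate(n_cars=20, steps=500, dt=0.1, a=1.0, noise=0.3, seed=0):
    rng = random.Random(seed)
    x = [i * 2.5 for i in range(n_cars)][::-1]   # car 0 (leader) in front
    v = [optimal_velocity(2.5)] * n_cars
    for _ in range(steps):
        # The leader's speed performs a bounded random walk (the fluctuation).
        v[0] = max(0.0, v[0] + noise * rng.uniform(-1, 1) * dt)
        for i in range(1, n_cars):
            headway = x[i - 1] - x[i]
            v[i] += a * (optimal_velocity(headway) - v[i]) * dt
        x = [xi + vi * dt for xi, vi in zip(x, v)]
    return [x[i - 1] - x[i] for i in range(1, n_cars)]   # final headways

print([round(h, 2) for h in simulate()[:8]])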
Road Network State Estimation Using Random Forest Ensemble Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yi; Edara, Praveen; Chang, Yohan
Network-scale travel time prediction not only enables traffic management centers (TMC) to proactively implement traffic management strategies, but also allows travelers to make informed decisions about route choices between various origins and destinations. In this paper, a random forest estimator was proposed to predict travel time in a network. The estimator was trained using two years of historical travel time data for a case study network in St. Louis, Missouri. Both temporal and spatial effects were considered in the modeling process. The random forest models predicted travel times accurately during both congested and uncongested traffic conditions. The computational times for the models were low, making them useful for real-time traffic management and traveler information applications.
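A minimal version of this modeling setup, using scikit-learn's RandomForestRegressor with temporal features (hour of day, day of week) and a neighboring link's travel time as a spatial feature, is sketched below on synthetic data standing in for the two years of historical observations.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
hour = rng.integers(0, 24, n)                       # temporal feature
dow = rng.integers(0, 7, n)                         # temporal feature
# Upstream link travel time: peaks at the 8:00 and 17:00 rush hours (invented).
upstream_tt = 5 + 3 * ((hour == 8) | (hour == 17)) + rng.normal(0, 0.5, n)
# Target link travel time depends on the upstream link (spatial effect)
# and on weekdays (invented relationship).
y = 0.8 * upstream_tt + 2 * (dow < 5) + rng.normal(0, 0.3, n)

X = np.column_stack([hour, dow, upstream_tt])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
print("mean abs error (min):", np.abs(pred - y[4000:]).mean().round(3))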
Data acquisition and path selection decision making for an autonomous roving vehicle
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Shen, C. N.; Yerazunis, S. W.
1976-01-01
Problems related to the guidance of an autonomous rover for unmanned planetary exploration were investigated. Topics included in these studies were: simulation on an interactive graphics computer system of the Rapid Estimation Technique for detection of discrete obstacles; incorporation of a simultaneous Bayesian estimate of states and inputs in the Rapid Estimation Scheme; development of methods for estimating actual laser rangefinder errors and their application to data provided by the Jet Propulsion Laboratory; and modification of a path selection system simulation computer code for evaluation of a hazard detection system based on laser rangefinder data.
Enhancing CORSIM for simulating high occupancy/toll lane operations.
DOT National Transportation Integrated Search
2012-03-15
Congestion pricing has been advocated as an efficient way to mitigate traffic congestion since the 1920s. A prevalent form of congestion pricing in the U.S. is high occupancy/toll (HOT) lanes. The operating objective of HOT lanes is to improve the th...
Code of Federal Regulations, 2011 CFR
2011-04-01
Management Systems, § 500.109 CMS. (a) For purposes of this part, congestion means the level at... Congestion management means the application of strategies to improve system performance and reliability by... A congestion management system or process is a systematic and regionally accepted approach for managing congestion that...
Code of Federal Regulations, 2010 CFR
2010-04-01
Management Systems, § 500.109 CMS. (a) For purposes of this part, congestion means the level at... Congestion management means the application of strategies to improve system performance and reliability by... A congestion management system or process is a systematic and regionally accepted approach for managing congestion that...
Code of Federal Regulations, 2013 CFR
2013-04-01
Management Systems, § 500.109 CMS. (a) For purposes of this part, congestion means the level at... Congestion management means the application of strategies to improve system performance and reliability by... A congestion management system or process is a systematic and regionally accepted approach for managing congestion that...
Code of Federal Regulations, 2014 CFR
2014-04-01
Management Systems, § 500.109 CMS. (a) For purposes of this part, congestion means the level at... Congestion management means the application of strategies to improve system performance and reliability by... A congestion management system or process is a systematic and regionally accepted approach for managing congestion that...
Code of Federal Regulations, 2012 CFR
2012-04-01
Management Systems, § 500.109 CMS. (a) For purposes of this part, congestion means the level at... Congestion management means the application of strategies to improve system performance and reliability by... A congestion management system or process is a systematic and regionally accepted approach for managing congestion that...
DOT National Transportation Integrated Search
2012-09-01
Capacity, demand, and vehicle-based emissions reduction strategies are compared for several pollutants employing aggregate US congestion and vehicle fleet condition data. We find that congestion mitigation does not inevitably lead to reduced emissi...
Statewide planning scenario synthesis : transportation congestion measurement and management.
DOT National Transportation Integrated Search
2005-09-01
This study is a review of current practices in 13 states to: (1) measure traffic congestion and its costs; and (2) manage congestion with programs and techniques that do not involve the building of new highway capacity. In regard to the measures of c...
Feasibility of a special-purpose computer to solve the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Gritton, E. C.; King, W. S.; Sutherland, I.; Gaines, R. S.; Gazley, C., Jr.; Grosch, C.; Juncosa, M.; Petersen, H.
1978-01-01
Orders-of-magnitude improvements in computer performance can be realized with a parallel array of thousands of fast microprocessors. In this architecture, wiring congestion is minimized by limiting processor communication to nearest neighbors. When certain standard algorithms are applied to a viscous flow problem and existing LSI technology is used, performance estimates of this conceptual design show a dramatic decrease in computational time when compared to the CDC 7600.
Multipath Very-Simplified Estimate of Adversary Sequence Interruption v. 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snell, Mark K.
2017-10-10
MP VEASI is a training tool that models physical protection systems for fixed sites using Adversary Sequence Diagrams (ASDs) and then searches the ASD for the most vulnerable adversary paths. The identified paths have the lowest Probability of Interruption among all paths through the ASD.
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, in the range of several millimeters, is a challenging task, mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since real measurements are largely infeasible in capsule endoscopy subjects.
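In-body path-loss expressions of this kind are commonly of the log-distance form; the sketch below uses PL(d) = PL(d0) + 10 n log10(d/d0) with illustrative parameters and a deliberately high exponent to reflect lossy tissue. The paper's fitted coefficients for 2.4 GHz in-body propagation are not reproduced here.

import math

def path_loss_db(d_mm, pl0_db=35.0, d0_mm=10.0, n=6.0):
    # Log-distance path loss; pl0_db, d0_mm and the exponent n are
    # assumed values for illustration, not the paper's fit.
    return pl0_db + 10 * n * math.log10(d_mm / d0_mm)

for d in (10, 50, 100):
    print(f"{d} mm: {path_loss_db(d):.1f} dB")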
Impact of traffic congestion on road accidents: a spatial analysis of the M25 motorway in England.
Wang, Chao; Quddus, Mohammed A; Ison, Stephen G
2009-07-01
Traffic congestion and road accidents are two external costs of transport and the reduction of their impacts is often one of the primary objectives for transport policy makers. The relationship between traffic congestion and road accidents however is not apparent and less studied. It is speculated that there may be an inverse relationship between traffic congestion and road accidents, and as such this poses a potential dilemma for transport policy makers. This study aims to explore the impact of traffic congestion on the frequency of road accidents using a spatial analysis approach, while controlling for other relevant factors that may affect road accidents. The M25 London orbital motorway, divided into 70 segments, was chosen to conduct this study and relevant data on road accidents, traffic and road characteristics were collected. A robust technique has been developed to map M25 accidents onto its segments. Since existing studies have often used a proxy to measure the level of congestion, this study has employed a precise congestion measurement. A series of Poisson based non-spatial (such as Poisson-lognormal and Poisson-gamma) and spatial (Poisson-lognormal with conditional autoregressive priors) models have been used to account for the effects of both heterogeneity and spatial correlation. The results suggest that traffic congestion has little or no impact on the frequency of road accidents on the M25 motorway. All other relevant factors have provided results consistent with existing studies.
Bonham, A C; Kott, K S; Ravi, K; Kappagoda, C T; Joad, J P
1996-05-15
1. This study tested the hypothesis that substance P stimulates rapidly adapting receptors (RARs), contributes to the increase in RAR activity produced by mild pulmonary congestion, and evokes an augmented response from RARs when combined with near-threshold levels of pulmonary congestion. 2. RAR activity, peak tracheal pressure, arterial blood pressure and left atrial pressure were measured in paralysed, anaesthetized and ventilated rabbits. Substance P was given i.v. in one-half log incremental doses to a maximum of 3 micrograms kg⁻¹. Mild pulmonary congestion was produced by inflating a balloon in the left atrium to increase left atrial pressure by 5 mmHg. Near-threshold levels of pulmonary congestion were produced by increasing left atrial pressure by 2 mmHg. 3. Substance P produced dose-dependent increases in RAR activity. The highest dose given increased the activity from 1.3 +/- 0.5 to 11.0 +/- 3.1 impulses bin⁻¹. Increases in left atrial pressure of 5 mmHg increased RAR activity from 3.8 +/- 1.4 to 14.7 +/- 3.9 impulses bin⁻¹. Blockade of NK1 receptors with CP 96345 significantly attenuated RAR responses to substance P and to mild pulmonary congestion. 4. Doses of substance P, which alone had no effect, stimulated the RARs when delivered during near-threshold levels of pulmonary congestion. 5. The findings suggest that substance P augments the stimulatory effect of mild pulmonary congestion on RAR activity, most probably by enhancing hydraulically induced microvascular leak.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
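As a rough illustration of the exhaustive-search step described above, the sketch below scores candidate sensor suites for a linear Gaussian model by the trace of the maximum a posteriori estimation error covariance. The matrices and sensor lists are placeholders, not the engine model from the paper.

```python
import itertools
import numpy as np

def map_sse(C, R, P0):
    """Theoretical sum of squared MAP estimation errors for a linear model
    y = C x + v, v ~ N(0, R), with prior x ~ N(0, P0): the trace of the
    posterior covariance (C' R^-1 C + P0^-1)^-1."""
    P = np.linalg.inv(C.T @ np.linalg.inv(R) @ C + np.linalg.inv(P0))
    return np.trace(P)

def best_suite(C_all, noise_var, P0, baseline, optional, k):
    """Exhaustive search: add k of the optional sensors to the baseline
    suite, minimizing the theoretical estimation error.

    C_all: full measurement matrix (one row per candidate sensor),
    noise_var: np.array of sensor noise variances, baseline/optional:
    lists of row indices. All inputs are hypothetical examples."""
    best = None
    for extra in itertools.combinations(optional, k):
        rows = sorted(baseline + list(extra))
        C = C_all[rows, :]
        R = np.diag(noise_var[rows])
        score = map_sse(C, R, P0)
        if best is None or score < best[0]:
            best = (score, rows)
    return best
```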
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2016-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
Gately, Conor K; Hutyra, Lucy R; Peterson, Scott; Sue Wing, Ian
2017-10-01
On-road emissions vary widely on time scales as short as minutes and length scales as short as tens of meters. Detailed data on emissions at these scales are a prerequisite to accurately quantifying ambient pollution concentrations and identifying hotspots of human exposure within urban areas. We construct a highly resolved inventory of hourly fluxes of CO, NO₂, NOₓ, PM₂.₅, and CO₂ from road vehicles on 280,000 road segments in eastern Massachusetts for the year 2012. Our inventory integrates a large database of hourly vehicle speeds derived from mobile phone and vehicle GPS data with multiple regional datasets of vehicle flows, fleet characteristics, and local meteorology. We quantify the 'excess' emissions from traffic congestion, finding modest congestion enhancement (3-6%) at regional scales, but hundreds of local hotspots with highly elevated annual emissions (up to 75% for individual roadways in key corridors). Congestion-driven reductions in vehicle fuel economy necessitated 'excess' consumption of 113 million gallons of motor fuel, worth ~$415M, but this accounted for only 3.5% of the total fuel consumed in Massachusetts, as over 80% of vehicle travel occurs in uncongested conditions. Across our study domain, emissions are highly spatially concentrated, with 70% of pollution originating from only 10% of the roads. The 2011 EPA National Emissions Inventory (NEI) understates our aggregate emissions of NOₓ, PM₂.₅, and CO₂ by 46%, 38%, and 18%, respectively. However, CO emissions agree within 5% for the two inventories, suggesting that the large biases in NOₓ and PM₂.₅ emissions arise from differences in estimates of diesel vehicle activity. By providing fine-scale information on local emission hotspots and regional emissions patterns, our inventory framework supports targeted traffic interventions, transparent benchmarking, and improvements in overall urban air quality.
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
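For a convex program min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, the exact (absolute value) penalty discussed above is

```latex
\mathcal{E}_\rho(x) = f(x)
  + \rho \sum_{j} \lvert h_j(x) \rvert
  + \rho \sum_{i} \max\{0,\, g_i(x)\},
```

and for all ρ beyond a finite threshold (roughly, the size of the optimal Lagrange multipliers) the minimizers of E_ρ coincide with the constrained solution. Path following tracks the minimizer x(ρ) as ρ increases from 0, the unconstrained solution.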
Path Following in the Exact Penalty Method of Convex Programming
Zhou, Hua; Lange, Kenneth
2015-01-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-15
... until December 20, 2013, relieving vehicular traffic congestion during the weekday and weekend daytime... final temporary rule could result in additional vehicular traffic congestion without providing any additional... anticipates continued vehicular traffic congestion over the Gilmerton Highway Bridge due to the reduction of...
Alternative Fuels Data Center: Deploying Alternative Fuel Vehicles and Infrastructure in Chicago, Illinois, Through the Congestion Mitigation and Air Quality Improvement Program
NASA Astrophysics Data System (ADS)
Goodrich, J. P.; Zona, D.; Gioli, B.; Murphy, P.; Burba, G. G.; Oechel, W. C.
2015-12-01
Expanding eddy covariance measurements of CO₂ and CH₄ fluxes in the Arctic is critical for refining the global carbon budget. Continuous measurements are particularly challenging because of the remote locations, low power availability, and extreme weather conditions. The necessity of tailoring instrumentation to different sites further complicates the interpretation of results and may add uncertainty to estimates of annual CO₂ budgets. We investigated the influence of different sensor combinations on FCO₂, latent heat (LE), and FCH₄, and assessed the differences in annual FCO₂ estimated with different instrumentation at the same sites. Using data from four sites across the North Slope of Alaska, we resolved FCO₂ and FCH₄ to within 5% using different combinations of open- and closed-path gas analyzers and within 10% using heated and non-heated anemometers. A continuously heated anemometer increased data coverage relative to non-heated anemometers while yielding comparable annual FCO₂, despite over-estimating sensible heat fluxes by 15%. We also implemented an intermittent heating strategy whereby heating was activated only when ice or snow blockage of the transducers was detected. This resulted in data coverage (~60%) comparable to the continuously heated anemometer, while avoiding potential over-estimation of sensible heat and gas fluxes. We found good agreement in FCO₂ and FCH₄ from two closed-path and one open-path gas analyzers, despite the need for large spectral corrections of closed-path fluxes and density and temperature corrections of open-path measurements. However, data coverage was generally greater with the closed-path analyzers, especially during cold seasons (36-40% vs 10-14% for the open path), when fluxes from Arctic regions are particularly uncertain and potentially critical to annual carbon budgets. Measurement of Arctic LE remains a challenge due to strong attenuation along sample tubes, even when heated, which could not be accounted for with spectral corrections.
Estimation of the full marginal costs of port related truck traffic.
Berechman, Joseph
2009-11-01
The NY region is expected to grow by an additional 1 million people by 2020, which translates into roughly 70 million more tons of goods to be delivered annually. Due to a lack of rail capacity, trucks will haul most of this freight, further straining an already constrained highway network. What are the total costs associated with this additional traffic, in particular congestion, safety, and emissions? Since a major source of this expected flow is the Port of New York-New Jersey, this paper focuses on estimating the full marginal costs of truck traffic resulting from the further expansion of the port's activities.
How Big is Too Big for Hubs: Marginal Profitability in Hub-and-Spoke Networks
NASA Technical Reports Server (NTRS)
Ross, Leola B.; Schmidt, Stephen J.
1997-01-01
Increasing the scale of hub operations at major airports has led to concerns about congestion at excessively large hubs. In this paper, we estimate the marginal cost of adding spokes to an existing hub network. We observe entry/non-entry decisions on potential spokes from existing hubs, and estimate both a variable profit function for providing service in markets using that spoke and the fixed costs of providing service to the spoke. We let the fixed costs depend upon the scale of operations at the hub, and find the hub size at which spoke service costs are minimized.
Are Tornadoes Getting Stronger?
NASA Astrophysics Data System (ADS)
Elsner, J.; Jagger, T.
2013-12-01
A cumulative logistic model for tornado damage category is developed and examined. Damage path length and width are significantly correlated with the odds of a tornado receiving the next-highest damage category. Given values for the cube root of path length and the square root of path width, the model predicts a probability for each category. The length and width coefficients are insensitive to the switch to the Enhanced Fujita (EF) scale and to distance from the nearest city, although these variables are statistically significant in the model. The width coefficient is sensitive to whether or not the tornado caused at least one fatality. This is likely because the dimensions and characteristics of the damage path for such events are always based on ground surveys. The model-predicted probabilities across the categories are then multiplied by the center wind speed of each EF category to obtain an estimate of the highest tornado wind speed on a continuous scale in units of meters per second. The estimated wind speeds correlate at a level of 0.82 (95% confidence interval: 0.46, 0.95) with wind speeds estimated independently from a Doppler radar calibration. The estimated wind speeds allow analyses to be done on the tornado database that are not possible with the categorical scale. The modeled intensities can be used in climatology and in environmental and engineering applications. More work needs to be done to understand the upward trends in path length and width. The increases lead to an apparent increase in tornado intensity across all EF categories.
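Based on the description above, the fitted model is a cumulative (proportional-odds) logistic regression of roughly the following form; the covariate list is paraphrased and the coefficients are not the paper's values:

```latex
\operatorname{logit} P(Y_i \ge j) = \alpha_j
  + \beta_1 L_i^{1/3} + \beta_2 W_i^{1/2} + \cdots,
\qquad
\hat{v}_i = \sum_{j} P(Y_i = j)\, v_j,
```

where Y_i is the damage category of tornado i, L_i its path length, W_i its path width, and v_j the center wind speed of EF category j, so the estimated wind speed is the probability-weighted sum over categories.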
DOT National Transportation Integrated Search
2014-04-01
This paper presents lessons learned from household traveler surveys administered in Seattle and Atlanta as part of the evaluation of the Urban Partnership Agreement and Congestion Reduction Demonstration Programs. The surveys use a two-stage panel su...
23 CFR 970.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2011 CFR
2011-04-01
... experiencing congestion, the NPS shall develop a separate CMS to cover those facilities. Approaches may include... congestion management strategies; (v) Determine methods to monitor and evaluate the performance of the multi... means the level at which transportation system performance is no longer acceptable due to traffic...
23 CFR 970.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2010 CFR
2010-04-01
... experiencing congestion, the NPS shall develop a separate CMS to cover those facilities. Approaches may include... congestion management strategies; (v) Determine methods to monitor and evaluate the performance of the multi... means the level at which transportation system performance is no longer acceptable due to traffic...
23 CFR 973.214 - Indian lands congestion management system (CMS).
Code of Federal Regulations, 2014 CFR
2014-04-01
... Title 23 (Highways): Indian lands congestion management system (CMS), § 973.214... HIGHWAYS MANAGEMENT SYSTEMS PERTAINING TO THE BUREAU OF INDIAN AFFAIRS AND THE INDIAN RESERVATION ROADS PROGRAM, Bureau of Indian Affairs Management Systems, § 973.214 Indian lands congestion management system...
DOT National Transportation Integrated Search
2002-01-01
A relationship between traffic flow variables and crash characteristics can greatly help the traffic engineer in the field to arrive at appropriate congestion mitigation measures that not only alleviate congestion and save time but also reduce the pr...
14 CFR 137.49 - Operations over other than congested areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Title 14 (Aeronautics and Space): Operations over other than congested areas, § 137.49. Section 137.49, Aeronautics and Space, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... congested areas below 500 feet above the surface and closer than 500 feet to persons, vessels, vehicles, and...
76 FR 70122 - Plan for Conduct of 2012 Electric Transmission Congestion Study
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
... preparations for the 2012 Congestion Study, and seeks comments on what publicly-available data and information...), and Microsoft PowerPoint (.ppt). The Department intends to use only data that is publicly available... Study, the Department gathered historical congestion data obtained from existing studies prepared by...
Quantum random walks on congested lattices and the effect of dephasing.
Motes, Keith R; Gilchrist, Alexei; Rohde, Peter P
2016-01-27
We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker's direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices.
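A minimal numerical sketch of such a walk follows: a one-dimensional Hadamard walk with randomly placed direction-reversing defects, with coin dephasing applied as a simple stochastic unraveling (apply a Z phase with probability p per step and average the resulting distributions over trajectories). The lattice size, defect density, and dephasing rate are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
size, steps, p_dephase, n_traj = 201, 80, 0.1, 200
defects = rng.random(size) < 0.05              # static direction-reversing sites
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

def trajectory():
    psi = np.zeros((size, 2), dtype=complex)   # amplitude per (site, coin)
    psi[size // 2, 0] = 1.0
    for _ in range(steps):
        psi = psi @ H.T                        # coin toss at every site
        psi[defects] = psi[defects][:, ::-1]   # a defect reverses the walker
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]           # coin |0> steps right
        shifted[:-1, 1] = psi[1:, 1]           # coin |1> steps left
        psi = shifted
        if rng.random() < p_dephase:           # stochastic unraveling of coin
            psi[:, 1] *= -1.0                  # dephasing: apply a Z phase
    return (np.abs(psi) ** 2).sum(axis=1)

prob = np.mean([trajectory() for _ in range(n_traj)], axis=0)
x = np.arange(size) - size // 2
print("spread (std of position):", np.sqrt(np.sum(prob * x ** 2)))
```

Sweeping p_dephase from 0 to 1 interpolates between ballistic quantum spreading and diffusive classical behavior, which is the comparison the paper draws.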
Mobile robot dynamic path planning based on improved genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, Yong; Zhou, Heng; Wang, Ying
2017-08-01
In a dynamic, unknown environment, path planning for mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed: a reward value model is designed to estimate the probability that dynamic obstacles lie on a path, and this reward value function is incorporated into the genetic algorithm. A unique coding technique reduces the computational complexity of the algorithm. The fitness function of the genetic algorithm considers three factors: the safety of the path, the shortest path distance, and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in a variety of complex dynamic environments.
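As a toy illustration of the three-factor fitness just described, the sketch below scores a candidate path by its minimum obstacle clearance, its length, and an assumed per-cell reward value; the weights, obstacle data, and helper names are hypothetical, not the paper's formulation.

```python
import math

OBSTACLES = [(3, 4), (6, 1)]          # known static obstacles (toy data)

def clearance(p):
    """Distance from point p to the nearest known obstacle."""
    return min(math.dist(p, o) for o in OBSTACLES)

def fitness(path, reward_value, w_safe=0.4, w_short=0.3, w_reward=0.3):
    """Toy fitness combining safety, path length, and reward value
    (estimated probability that dynamic obstacles stay off the path).
    Weights are illustrative assumptions."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    safety = min(clearance(p) for p in path)        # worst-case clearance
    reward = sum(reward_value.get(p, 1.0) for p in path) / len(path)
    return w_safe * safety + w_short / (1.0 + length) + w_reward * reward

print(fitness([(0, 0), (1, 1), (2, 3)], {(1, 1): 0.8}))
```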
Integrative Assessment of Congestion in Heart Failure Throughout the Patient Journey.
Girerd, Nicolas; Seronde, Marie-France; Coiro, Stefano; Chouihed, Tahar; Bilbault, Pascal; Braun, François; Kenizou, David; Maillier, Bruno; Nazeyrollas, Pierre; Roul, Gérard; Fillieux, Ludivine; Abraham, William T; Januzzi, James; Sebbag, Laurent; Zannad, Faiez; Mebazaa, Alexandre; Rossignol, Patrick
2018-04-01
Congestion is one of the main predictors of poor patient outcome in patients with heart failure. However, congestion is difficult to assess, especially when symptoms are mild. Although numerous clinical scores, imaging tools, and biological tests are available to assist physicians in ascertaining and quantifying congestion, not all are appropriate for use in all stages of patient management. In recent years, multidisciplinary management in the community has become increasingly important to prevent heart failure hospitalizations. Electronic alert systems and communication platforms are emerging that could be used to facilitate patient home monitoring that identifies congestion from heart failure decompensation at an earlier stage. This paper describes the role of congestion detection methods at key stages of patient care: pre-admission, admission to the emergency department, in-hospital management, and lastly, discharge and continued monitoring in the community. The multidisciplinary working group, which consisted of cardiologists, emergency physicians, and a nephrologist with both clinical and research backgrounds, reviewed the current literature regarding the various scores, tools, and tests to detect and quantify congestion. This paper describes the role of each tool at key stages of patient care and discusses the advantages of telemedicine as a means of providing true integrated patient care.
Does rush hour see a rush of emotions? Driver mood in conditions likely to exhibit congestion
Morris, Eric A; Hirsch, Jana A.
2016-01-01
Polls show that a large portion of the public considers traffic congestion to be a problem and believes a number of policy interventions would ameliorate it. However, most of the public rejects new taxes and fees to fund these improvements. This may be because of a disconnect between the public's stated antipathy towards congestion and the recalled emotional costs congestion imposes. To explore this, we use a large and representative sample drawn from the American Time Use Survey to examine how drivers experience four emotions (happiness, sadness, stress, and fatigue), plus a constructed composite mood variable, when they travel in peak periods, in large cities, in city centers, and in combinations of these. We also explore the interactions between these indicators and trip duration. We find evidence that drivers in the largest cities at the very peak of rush hour (5:00pm-6:00pm) are in a less positive mood, presumably because of congestion. However, this effect, though significant, is small, and we find no significant results using broader definitions of the peak period. In all, our findings suggest that congestion's impact on drivers as a group is quite limited. This may help explain why the public's attitude toward painful financial trade-offs to address congestion is lukewarm. PMID:27231669
He, Huaguang; Li, Taoshen; Feng, Luting; Ye, Jin
2017-07-15
Different from traditional wired networks, the fundamental cause of transmission congestion in wireless ad hoc networks is medium contention. How to use the congestion state from the MAC (Media Access Control) layer to adjust the transmission rate is the core problem in transport protocol design. However, recent work has shown that existing cross-layer congestion detection solutions are either too complex to deploy or unable to characterize congestion accurately. We first propose a new congestion metric called frame transmission efficiency (the ratio of the successful transmission delay to the frame service delay), which describes medium contention in a fast and accurate manner. We further present the design and implementation of RECN, a general supporting scheme that combines this MAC-layer frame transmission efficiency with standard ECN (Explicit Congestion Notification) signaling to adjust the transport sending rate. Our method can be deployed on commodity switches with small firmware updates, with no modification to end hosts. We integrate RECN transparently (i.e., without modification) with TCP in NS2 simulations. The experimental results show that RECN remarkably improves network goodput across multiple concurrent TCP flows.
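A minimal sketch of the metric and its use as an ECN trigger, under assumed units and an assumed marking threshold (the paper's actual signaling rules are more involved):

```python
def frame_transmission_efficiency(successful_tx_delay, frame_service_delay):
    """Congestion metric from the abstract: ratio of the successful
    transmission delay to the total frame service delay (which also
    includes queueing, backoff, and retries). Approaches 1 on an idle
    channel and falls as contention grows."""
    return successful_tx_delay / frame_service_delay

def should_mark_ecn(efficiency, threshold=0.6):
    # threshold is an illustrative assumption, not the paper's value
    return efficiency < threshold

# A frame that needed 2 ms on air but 9 ms of total service time:
eff = frame_transmission_efficiency(2.0, 9.0)
print(round(eff, 2), should_mark_ecn(eff))
```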
Voga, Gorazd
2008-01-01
The measurement of pulmonary artery occlusion pressure (PAOP) is important for estimating left ventricular filling pressure and for distinguishing between cardiac and non-cardiac etiologies of pulmonary edema. Clinical assessment of PAOP, which relies on physical signs of pulmonary congestion, is uncertain. Reliable PAOP measurement can be performed with a pulmonary artery catheter, but it is also possible using echocardiography. Several Doppler variables show acceptable correlation with PAOP and can be used for its estimation in cardiac and critically ill patients. Noninvasive PAOP estimation should probably become an integral part of transthoracic and transesophageal echocardiographic evaluation in critically ill patients. However, the limitations of both methods should be taken into consideration, and in specific patients invasive PAOP measurement remains unavoidable when the exact value of PAOP is needed. PMID:18394183
Integrated Traffic Flow Management Decision Making
NASA Technical Reports Server (NTRS)
Grabbe, Shon R.; Sridhar, Banavar; Mukherjee, Avijit
2009-01-01
A generalized approach is proposed to support integrated traffic flow management decision making studies at both the U.S. national and regional levels. It can consider tradeoffs between alternative optimization and heuristic based models, strategic versus tactical flight controls, and system versus fleet preferences. Preliminary testing was accomplished by implementing thirteen unique traffic flow management models, which included all of the key components of the system, and conducting 85 six-hour fast-time simulation experiments. These experiments considered variations in the strategic planning look-ahead times, the replanning intervals, and the types of traffic flow management control strategies. Initial testing indicates that longer strategic planning look-ahead times and re-planning intervals result in steadily decreasing levels of sector congestion for a fixed delay level. This applies when accurate estimates of the air traffic demand, airport capacities and airspace capacities are available. In general, the distribution of the delays amongst the users was found to be most equitable when scheduling flights using a heuristic scheduling algorithm, such as ration-by-distance. On the other hand, equity was the worst when using scheduling algorithms that took into account the number of seats aboard each flight. Though the scheduling algorithms were effective at alleviating sector congestion, the tactical rerouting algorithm was the primary control for avoiding en route weather hazards. Finally, the modeled levels of sector congestion, the number of weather incursions, and the total system delays, were found to be in fair agreement with the values that were operationally observed on both good and bad weather days.
Buteau, Stephane; Goldberg, Mark S; Burnett, Richard T; Gasparrini, Antonio; Valois, Marie-France; Brophy, James M; Crouse, Dan L; Hatzopoulou, Marianne
2018-04-01
Persons with congestive heart failure may be at higher risk of the acute effects related to daily fluctuations in ambient air pollution. To address some of the limitations of previous grouped-analysis studies, we developed a cohort study of persons with congestive heart failure to estimate whether daily non-accidental mortality was associated with spatially resolved, daily exposures to ambient nitrogen dioxide (NO₂) and ozone (O₃), and whether these associations were modified by a series of indicators potentially reflecting complications or worsening of health. We constructed the cohort from the linkage of administrative health databases. Daily exposure was assigned using methods we developed previously to predict spatially resolved, time-dependent concentrations of ambient NO₂ (all year) and O₃ (warm season) at participants' residences. We performed two distinct types of analyses: a case-crossover analysis that contrasts the same person at different times, and a nested case-control analysis that contrasts different persons at similar times. We modelled the effects of air pollution and weather (case-crossover only) on mortality using distributed lag nonlinear models over lags 0 to 3 days. We developed from administrative health data a series of indicators that may reflect the underlying construct of "declining health", and used interactions between these indicators and the cross-basis function for each air pollutant to assess potential effect modification. The magnitude of the cumulative as well as the lag-specific estimates of association differed in many instances according to the metric of exposure. Using the back-extrapolation method, which is our preferred exposure model, we found for the case-crossover design a cumulative mean percentage change (MPC) in daily mortality per interquartile increment in NO₂ (8.8 ppb) of 3.0% (95% CI: -0.9, 6.9%) and for O₃ (16.5 ppb) of 3.5% (95% CI: -4.5, 12.1%). For O₃ there was strong confounding by weather (unadjusted MPC = 7.1%; 95% CI: 1.7, 12.7%). For the nested case-control approach, the cumulative MPC for NO₂ in daily mortality was 2.9% (95% CI: -0.9, 6.9%) and for O₃ 7.3% (95% CI: 3.0, 11.9%). We found evidence of effect modification between daily mortality and cumulative NO₂ and O₃ according to the prescribed dose of furosemide in the nested case-control analysis, but not in the case-crossover analysis. Mortality in congestive heart failure was associated with exposure to daily ambient NO₂ and O₃ predicted from a back-extrapolation method using a land use regression model built from dense sampling surveys. The methods used to assess exposure can have considerable influence on the estimated acute health effects of the two air pollutants.
A Broadband Microwave Radiometer Technique at X-band for Rain and Drop Size Distribution Estimation
NASA Technical Reports Server (NTRS)
Meneghini, R.
2005-01-01
Radiometric brightness temperatures below about 12 GHz provide accurate estimates of path attenuation through precipitation and cloud water. Multiple brightness temperature measurements at X-band frequencies can be used to estimate rainfall rate and parameters of the drop size distribution once a correction for cloud water attenuation is made. Employing a stratiform storm model, calculations of the brightness temperatures at 9.5, 10, and 12 GHz are used to simulate estimates of path-averaged median mass diameter, number concentration, and rainfall rate. The results indicate that reasonably accurate estimates of rainfall rate and information on the drop size distribution can be derived over ocean under low to moderate wind speed conditions.
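The attenuation estimate behind this technique follows from the standard radiative transfer relation for a non-scattering medium: with an effective medium temperature T_m and cosmic background T_c ≈ 2.7 K, the measured brightness temperature T_B yields the path attenuation

```latex
A\,[\mathrm{dB}] = 10 \log_{10} \frac{T_m - T_c}{T_m - T_B},
```

so brightness temperatures at several nearby frequencies provide several attenuation constraints from which rain rate and drop size parameters can be retrieved. This is the textbook relation, not the paper's full retrieval scheme.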
NASA Astrophysics Data System (ADS)
Vadivel, R.; Bhaskaran, V. Murali
2010-10-01
The main reason for packet loss in ad hoc networks is link failure or node failure. To increase path stability, it is essential to identify and mitigate such failures. Path stability can be determined from the stability of the individual links along a path. In this paper, we develop an adaptive reliable routing protocol using combined link stability estimation for mobile ad hoc networks. The main objective of this protocol is to find a Quality of Service (QoS) path while prolonging network lifetime and reducing packet loss. We calculate a combined metric for a path based on Link Expiration Time, Node Remaining Energy, Node Velocity, and received signal strength to predict link stability or lifetime. When a link failure occurs, a bypass route is established to retransmit the lost data. Simulation results show that the proposed reliable routing protocol achieves a high delivery ratio with reduced delay and packet drop.
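A toy version of such a combined metric is sketched below: each of the four parameters is normalized to [0, 1] and mixed with fixed weights, and the path score is taken as the weakest link. The weights, scales, and min aggregation are assumptions, not the paper's calibration.

```python
def link_stability(let, energy, velocity, rssi,
                   w=(0.4, 0.3, 0.2, 0.1),
                   let_max=100.0, e_max=100.0, v_max=20.0,
                   rssi_min=-90.0, rssi_max=-30.0):
    """Toy combined metric over the four abstract parameters, each
    normalized to [0, 1]; weights and scales are illustrative."""
    f_let = min(let / let_max, 1.0)            # longer expiration time is better
    f_energy = min(energy / e_max, 1.0)        # more remaining energy is better
    f_vel = 1.0 - min(velocity / v_max, 1.0)   # slower nodes are more stable
    f_rssi = (rssi - rssi_min) / (rssi_max - rssi_min)
    return sum(wi * fi for wi, fi in zip(w, (f_let, f_energy, f_vel, f_rssi)))

def path_stability(links):
    # a path is only as stable as its weakest link (one common choice)
    return min(link_stability(*l) for l in links)

print(round(path_stability([(60, 80, 5, -55), (30, 90, 12, -70)]), 3))
```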
Statistical estimation of ultrasonic propagation path parameters for aberration correction.
Waag, Robert C; Astheimer, Jeffrey P
2005-05-01
Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
Temporary Losses of Highway Capacity and Impacts on Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, S.M.
2002-07-31
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, and suboptimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Prior to this study, no nationwide estimates of temporary losses of highway capacity had been made by type of capacity-reducing event. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. This study is an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events. The objective of this study was to develop and implement methods for producing national-level estimates of the loss of capacity on the nation's highway facilities due to temporary phenomena as well as estimates of the impacts of such losses. The estimates produced by this study roughly indicate the magnitude of problems that are likely be addressed by the Congress during the next re-authorization of the Surface Transportation Programs. The scope of the study includes all urban and rural freeways and principal arterials in the nation's highway system for 1999. Specifically, this study attempts to quantify the extent of temporary capacity losses due to crashes, breakdowns, work zones, weather, and sub-optimal signal timing. These events can cause impacts such as capacity reduction, delays, trip rescheduling, rerouting, reduced mobility, and reduced reliability. This study focuses on the reduction of capacity and resulting delays caused by the temporary events mentioned above. Impacts other than capacity losses and delay, such as re-routing, rescheduling, reduced mobility, and reduced reliability, are not covered in this phase of research.
An improved empirical model for diversity gain on Earth-space propagation paths
NASA Technical Reports Server (NTRS)
Hodge, D. B.
1981-01-01
An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
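Models of this type are typically products of separable factors. The sketch below uses a Hodge-style functional form, a distance factor a(1 − exp(−b·d)) modulated by frequency, elevation, and baseline-orientation terms, with illustrative placeholder coefficients rather than the paper's fitted values.

```python
import math

def diversity_gain_db(sep_km, freq_ghz, elev_deg, baseline_deg,
                      a=2.9, b=0.35):
    """Hodge-style empirical diversity gain sketch. All coefficients are
    illustrative placeholders, not the fitted values from the paper."""
    g_d = a * (1.0 - math.exp(-b * sep_km))  # gain saturates with separation
    g_f = math.exp(-0.025 * freq_ghz)        # gain falls slowly with frequency
    g_e = 1.0 + 0.006 * elev_deg             # mild elevation dependence
    g_b = 1.0 + 0.002 * baseline_deg         # mild baseline-azimuth dependence
    return g_d * g_f * g_e * g_b

# Two terminals 10 km apart on a 20 GHz, 30-degree-elevation link:
print(round(diversity_gain_db(10.0, 20.0, 30.0, 45.0), 2), "dB")
```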
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
Bonham, A C; Kott, K S; Ravi, K; Kappagoda, C T; Joad, J P
1996-01-01
1. This study tested the hypothesis that substance P stimulates rapidly adapting receptors (RARs), contributes to the increase in RAR activity produced by mild pulmonary congestion, and evokes an augmented response from RARs when combined with near-threshold levels of pulmonary congestion. 2. RAR activity, peak tracheal pressure, arterial blood pressure and left atrial pressure were measured in paralysed, anaesthetized and ventilated rabbits. Substance P was given i.v. in one-half log incremental doses to a maximum of 3 μg kg⁻¹. Mild pulmonary congestion was produced by inflating a balloon in the left atrium to increase left atrial pressure by 5 mmHg. Near-threshold levels of pulmonary congestion were produced by increasing left atrial pressure by 2 mmHg. 3. Substance P produced dose-dependent increases in RAR activity. The highest dose given increased the activity from 1.3 ± 0.5 to 11.0 ± 3.1 impulses bin⁻¹. Increases in left atrial pressure of 5 mmHg increased RAR activity from 3.8 ± 1.4 to 14.7 ± 3.9 impulses bin⁻¹. Blockade of NK1 receptors with CP 96345 significantly attenuated RAR responses to substance P and to mild pulmonary congestion. 4. Doses of substance P, which alone had no effect, stimulated the RARs when delivered during near-threshold levels of pulmonary congestion. 5. The findings suggest that substance P augments the stimulatory effect of mild pulmonary congestion on RAR activity, most probably by enhancing hydraulically induced microvascular leak. PMID:8735708
Pérez, Alejandro; von Lilienfeld, O Anatole
2011-08-09
Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.
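The thermodynamic integration identity underlying these estimators is the standard one: for a Hamiltonian H(λ) interpolating between the initial (λ = 0) and final (λ = 1) systems,

```latex
\Delta F = F(1) - F(0)
 = \int_0^1 \left\langle \frac{\partial H(\lambda)}{\partial \lambda}
   \right\rangle_{\lambda} \, d\lambda,
```

where the average is taken in the ensemble at coupling λ. In path integral simulations a mass change enters through the kinetic part of the action, which is what motivates the virial-style estimators examined in the paper.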
Metamorphic P-T paths and Precambrian crustal growth in East Antarctica
NASA Technical Reports Server (NTRS)
Harley, S. L.
1988-01-01
The metamorphic constraints on crustal thicknesses in Archean and post-Archean terranes are summarized, along with possible implications for tectonic processes. It is important to recognize that P-T estimates represent perturbed conditions and should not be used to estimate steady-state geothermal gradients or crustal thicknesses. The example is cited of the Dora Maira complex in the Western Alps, where crustal rocks record conditions of 35 kbar and 800 C, implying their subduction to depths of 100 km or more, followed by subsequent uplift to the surface. Such P-T estimates therefore tell more about processes than about crustal thicknesses. More important, according to the author, are determinations of P-T paths, particularly when coupled with age measurements, because these may provide constraints on how and when perturbed conditions relax back to steady state. P-T paths are illustrated that should be expected from specific tectonic processes, including Tibetan-style collision, with and without subsequent extension, rifting of thin or thickened crust, and magmatic accretion. Growth of new crust associated with magmatic accretion, for example, could be monitored with these P-T paths.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... opening periods for the bridge during the day, relieving vehicular traffic congestion during the weekday... the rush hour periods and on the weekends, we anticipate a decrease in vehicular traffic congestion... vehicular traffic congestion over the Gilmerton Highway Bridge due to the reduction of highway lanes and...
DOT National Transportation Integrated Search
2011-10-19
"Highway stakeholders continue to support research studies that address critical issues of the current era, including congestion mitigation and revenue generation. A mechanism that addresses both concerns is congestion pricing which establishes a dir...
NASA Astrophysics Data System (ADS)
Zepf, Joachim; Rufa, Gerhard
1994-04-01
This paper focuses on the transient performance analysis of the congestion and flow control mechanisms in CCITT Signaling System No. 7 (SS7). Special attention is directed to the impacts of the introduction of intelligent services and new applications, e.g., Freephone, credit card services, user-to-user signaling, etc. In particular, we show that signaling traffic characteristics like signaling scenarios or signaling message length as well as end-to-end signaling capabilities have a significant influence on the congestion and flow control and, therefore, on the real-time signaling performance. One important result of our performance studies is that if, e.g., intelligent services are introduced, the SS7 congestion and flow control does not work correctly. To solve this problem, some reinvestigations into these mechanisms would be necessary. Therefore, some approaches, e.g., modification of the Signaling Connection Control Part (SCCP) congestion control, usage of the SCCP relay function, or a redesign of the MTP flow control procedures are discussed in order to guarantee the efficacy of the congestion and flow control mechanisms also in the future.
Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks
NASA Astrophysics Data System (ADS)
Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong
2010-04-01
Many applications require fast data transfer over high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control, while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving the advertised value from the receiver, the sender then uses the additive increase, multiplicative decrease (AIMD) mechanism to compute the congestion window size to be used. By integrating loss-based and delay-based congestion control, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism outperforms conventional TCP in high-speed and lossy wireless environments.
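A minimal sketch of the two halves of such a scheme follows: the receiver converts an interarrival-based bandwidth estimate into a window advertisement via the bandwidth-delay product, and the sender combines loss-based AIMD with that cap. Units, smoothing, and the feedback encoding are assumptions, not the paper's protocol details.

```python
def advertised_cwnd(pkt_size, interarrival_s, rtt_s, mss):
    """Receiver side: estimate available bandwidth from the packet
    interarrival interval, then convert it to a window (in segments)
    via the bandwidth-delay product."""
    bandwidth = pkt_size / interarrival_s        # bytes per second
    bdp = bandwidth * rtt_s                      # bandwidth-delay product
    return max(2, int(bdp / mss))

def sender_aimd(cwnd, advertised, loss):
    """Sender side: loss-based AIMD capped by the receiver's advertisement."""
    if loss:
        return max(2, cwnd // 2)                 # multiplicative decrease
    return min(cwnd + 1, advertised)             # additive increase

cwnd = 10
cwnd = sender_aimd(cwnd, advertised_cwnd(1500, 0.0005, 0.05, 1460), loss=False)
print(cwnd)
```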
Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin
2009-09-01
Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_{e} which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_{e} from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-01-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
Understanding congested travel in urban areas
Çolak, Serdar; Lima, Antonio; González, Marta C.
2016-01-01
Rapid urbanization and increasing demand for transportation burdens urban road infrastructures. The interplay of the number of vehicles and the available road capacity on their routes determines the level of congestion. Although approaches to modify demand and capacity exist, the possible limits of congestion alleviation achievable by modifying route choices alone have not been systematically studied. Here we couple the road networks of five diverse cities with the travel demand profiles in the morning peak hour obtained from billions of mobile phone traces to comprehensively analyse urban traffic. We show that a dimensionless ratio of the road supply to the travel demand explains the percentage of time lost in congestion. Finally, we examine congestion relief under a centralized routing scheme with varying levels of awareness of social good and quantify the benefits to show that moderate levels are enough to achieve significant collective travel time savings. PMID:26978719
Understanding congested travel in urban areas
NASA Astrophysics Data System (ADS)
Çolak, Serdar; Lima, Antonio; González, Marta C.
2016-03-01
Rapid urbanization and increasing demand for transportation burdens urban road infrastructures. The interplay of the number of vehicles and the available road capacity on their routes determines the level of congestion. Although approaches to modify demand and capacity exist, the possible limits of congestion alleviation achievable by modifying route choices alone have not been systematically studied. Here we couple the road networks of five diverse cities with the travel demand profiles in the morning peak hour obtained from billions of mobile phone traces to comprehensively analyse urban traffic. We show that a dimensionless ratio of the road supply to the travel demand explains the percentage of time lost in congestion. Finally, we examine congestion relief under a centralized routing scheme with varying levels of awareness of social good and quantify the benefits to show that moderate levels are enough to achieve significant collective travel time savings.
Gretener, S B; Läuchli, S; Leu, A J; Koppensteiner, R; Franzeck, U K
2000-01-01
The aim of the present study was to assess the influence of venous and lymphatic congestion on lymph capillary pressure (LCP) in the skin of the foot dorsum of healthy volunteers and of patients with lymph edema. LCP was measured at the foot dorsum of 12 patients with lymph edema and 18 healthy volunteers using the servo-nulling technique. Glass micropipettes (7-9 μm) were inserted under microscopic control into lymphatic microvessels visualized by fluorescence microlymphography before and during venous congestion. Venous and lymphatic congestion was attained by cuff compression (50 mm Hg) at the thigh level. Simultaneously, the capillary filtration rate was measured using strain gauge plethysmography. The mean LCP in patients with lymph edema increased significantly (p < 0.05) during congestion (15.7 ± 8.8 mm Hg) compared to the control value (12.2 ± 8.9 mm Hg). The corresponding values of LCP in healthy volunteers were 4.3 ± 2.6 mm Hg during congestion and 2.6 ± 2.8 mm Hg during control conditions (p < 0.01). The mean increase in LCP in patients with lymph edema was 3.4 ± 4.1 mm Hg, and 1.7 ± 2.0 mm Hg in healthy volunteers (NS). The maximum spread of the lymph capillary network in patients increased from 13.9 ± 6.8 mm before congestion to 18.8 ± 8.2 mm during thigh compression (p < 0.05). No increase could be observed in healthy subjects. In summary, venous and lymphatic congestion by cuff compression at the thigh level results in a significant increase in LCP in healthy volunteers as well as in patients with lymph edema. The increased spread of the contrast medium in the superficial microlymphatics in lymph edema patients indicates a compensatory mechanism for lymphatic drainage during congestion of the veins and lymph collectors of the leg.
Quantum random walks on congested lattices and the effect of dephasing
Motes, Keith R.; Gilchrist, Alexei; Rohde, Peter P.
2016-01-01
We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker’s direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices. PMID:26812924
Felker, G Michael; Mentz, Robert J; Adams, Kirkwood F; Cole, Robert T; Egnaczyk, Gregory F; Patel, Chetan B; Fiuzat, Mona; Gregory, Douglas; Wedge, Patricia; O'Connor, Christopher M; Udelson, James E; Konstam, Marvin A
2015-09-01
Congestion is a primary reason for hospitalization in patients with acute heart failure (AHF). Despite inpatient diuretics and vasodilators targeting decongestion, persistent congestion is present in many AHF patients at discharge and more severe congestion is associated with increased morbidity and mortality. Moreover, hospitalized AHF patients may have renal insufficiency, hyponatremia, or an inadequate response to traditional diuretic therapy despite dose escalation. Current alternative treatment strategies to relieve congestion, such as ultrafiltration, may also result in renal dysfunction to a greater extent than medical therapy in certain AHF populations. Truly novel approaches to volume management would be advantageous to improve dyspnea and clinical outcomes while minimizing the risks of worsening renal function and electrolyte abnormalities. One effective new strategy may be utilization of aquaretic vasopressin antagonists. A member of this class, the oral vasopressin-2 receptor antagonist tolvaptan, provides benefits related to decongestion and symptom relief in AHF patients. Tolvaptan may allow for less intensification of loop diuretic therapy and a lower incidence of worsening renal function during decongestion. In this article, we summarize evidence for decongestion benefits with tolvaptan in AHF and describe the design of the Targeting Acute Congestion With Tolvaptan in Congestive Heart Failure Study (TACTICS) and Study to Evaluate Challenging Responses to Therapy in Congestive Heart Failure (SECRET of CHF) trials.
Congestion based mechanism for route discovery in a V2I-V2V system applying smart devices and IoT.
Parrado, Natalia; Donoso, Yezid
2015-03-31
The Internet of Things is a new paradigm in which objects in a specific context can be integrated into traditional communication networks to actively participate in solving a determined problem. The Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) technologies are specific cases of IoT and key enablers for Intelligent Transportation Systems (ITS). V2V and V2I have been widely used to address transportation problems in cities, of which the most important is traffic congestion. A high percentage of congestion usually results from the inappropriate use of resources in the vehicular infrastructure. In addition, integrating traffic congestion into decision making for vehicular traffic is a challenge due to its highly dynamic behavior. In this paper, an optimization model for balancing the congestion percentage across streets is formulated. We then explore a fully congestion-oriented route discovery mechanism and propose a communication infrastructure to support it based on V2I and V2V communication. The mechanism is also compared with a modified Dijkstra approach that reacts to congestion states. Finally, we compare the efficiency of the vehicle's trip with the efficiency of the use of the vehicular network's capacity.
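For comparison purposes, a congestion-reactive shortest-path baseline of the kind mentioned above can be as simple as Dijkstra's algorithm over congestion-inflated edge weights. The sketch below is an illustrative baseline, not the paper's mechanism; the 1/(1 − congestion) inflation is an assumed weighting.

```python
import heapq

def congestion_aware_route(graph, src, dst):
    """Dijkstra over congestion-inflated travel times. graph maps node ->
    list of (neighbor, free_flow_time, congestion) with congestion in
    [0, 1); an edge's weight is free_flow_time / (1 - congestion)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                            # stale heap entry
        for v, t, c in graph[u]:
            nd = d + t / (1.0 - c)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                          # walk predecessors back to src
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

g = {"A": [("B", 2, 0.7), ("C", 3, 0.1)], "B": [("D", 2, 0.0)],
     "C": [("D", 2, 0.2)], "D": []}
print(congestion_aware_route(g, "A", "D"))  # routes around the congested A-B edge
```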
Limb congestion enhances the synchronization of sympathetic outflow with muscle contraction
NASA Technical Reports Server (NTRS)
Mostoufi-Moab, S.; Herr, M. D.; Silber, D. H.; Gray, K. S.; Leuenberger, U. A.; Sinoway, L. I.
2000-01-01
In this report, we examined whether the synchronization of muscle sympathetic nerve activity (MSNA) with muscle contraction is enhanced by limb congestion. To explore this relationship, we applied signal-averaging techniques to the MSNA signal obtained during short bouts of forearm contraction (2-s contraction/3-s rest cycle) at 40% maximal voluntary contraction for 5 min. We performed this analysis before and after forearm venous congestion, an intervention that augments the autonomic response to sustained static muscle contractions via a local effect on muscle afferents. An increased percentage of the MSNA was noted during second 2 of the 5-s contraction/rest cycles. The percentage of total MSNA seen during this particular second increased from minute 1 to 5 of contraction and was increased further by limb congestion (control minute 1 = 25.6 ± 2.0%, minute 5 = 32.8 ± 2.2%; limb congestion minute 1 = 29.3 ± 2.1%, minute 5 = 37.8 ± 3.9%; exercise main effect P < 0.005; limb congestion main effect P = 0.054). These changes in the distribution of signal-averaged MSNA were seen despite the fact that the mean number of sympathetic discharges did not increase over baseline. We conclude that synchronization of contraction and MSNA is seen during short repetitive bouts of handgrip. The sensitizing effects of contraction time and limb congestion are apparently due to feedback from muscle afferents within the exercising muscle.
Large-scale transportation network congestion evolution prediction using deep learning theory.
Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai
2015-01-01
Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners seeking to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics, but most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data are becoming increasingly ubiquitous, triggering a wave of data-driven research into transportation phenomena. Among the candidate techniques, deep learning is considered one of the most promising for handling very high-dimensional data. This study extends deep learning theory to large-scale transportation network analysis: a deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is used to model and predict traffic congestion evolution from taxi Global Positioning System (GPS) data. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy reaches as high as 88%, with computation completed in less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.
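For readers unfamiliar with the building block named above, the following is a minimal numpy sketch of one contrastive-divergence (CD-1) training step for a restricted Boltzmann machine. The layer sizes, learning rate, and binary congestion snapshots are invented for illustration; the paper's architecture stacks RBMs with a recurrent layer, which this sketch does not reproduce.

```python
# Toy CD-1 step for an RBM; all data and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.05):
    """One CD-1 update: v0 -> h0 -> v1 -> h1, then a gradient step."""
    ph0 = sigmoid(v0 @ W + c)                  # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b)                # reconstruction of the visibles
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return ((v0 - pv1) ** 2).mean()            # reconstruction error

# Fake binary "congested / free-flow" snapshots for 20 links.
data = (rng.random((64, 20)) < 0.3) * 1.0
W = 0.01 * rng.standard_normal((20, 8))
b, c = np.zeros(20), np.zeros(8)
for epoch in range(50):
    err = cd1_step(data, W, b, c)
print(f"final reconstruction error: {err:.3f}")
```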
Multi-Scale Visualization Analysis of Bus Flow Average Travel Speed in Qingdao
NASA Astrophysics Data System (ADS)
Yong, HAN; Man, GAO; Xiao-Lei, ZHANG; Jie, LI; Ge, CHEN
2016-11-01
Public transportation is a kind of complex spatiotemporal behaviour, and the traffic congestion and environmental pollution caused by the increase in private cars are becoming more and more serious in our city. Spatiotemporal data visualization is an effective tool for studying traffic: by transforming non-visual data into recognizable images, it can reveal where and when congestion forms, develops, and dissipates in space and time simultaneously. This paper develops a multi-scale visualization of average travel speed derived from floating bus data, enabling congestion on urban bus networks to be shown and analyzed. R, ECharts, and WebGL are used to draw statistical charts and a 3D wall map, which show the congestion in Qingdao across space and time. The results are as follows: (1) delay is more severe in the Shibei and Shinan areas than in the Licun and Laoshan areas; (2) high congestion usually occurs on Hong Kong Middle Road, Shandong Road, Nanjing Road, Liaoyang West Road, and Taiping Road; (3) from Monday to Sunday alike, congestion is more severe in the morning and evening rush hours than at other times; (4) the severity of congestion is higher on Monday morning than on Friday morning, and higher on Friday evening than on Monday evening. These results will help to improve public transportation in Qingdao.
NASA Astrophysics Data System (ADS)
Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.
2010-10-01
Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of any system. Here, a case study of a thermal power plant is considered. The existing time estimates represent the time to complete tasks; applying a fuzzy linear approach, it becomes clear that at each confidence level less time is needed to complete the tasks, and a shorter schedule in turn requires less cost. The objective of this paper is to show how a system becomes more efficient when a fuzzy linear approach is applied, optimizing the time estimates so that all tasks fit appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp), and most likely time (tm) are taken as data collected from the thermal power plant. These estimates yield the expected time (te), which represents the time to complete a particular task when all contingencies are considered. Using project evaluation and review technique (PERT) and critical path method (CPM) concepts, the critical path duration (CPD) of the project is calculated, indicating a fifty percent probability that the total set of tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the total project completion time follows readily from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), a defuzzified value of the time estimates is calculated. For the fuzzy range, four confidence levels are considered: 0.4, 0.6, 0.8, and 1. From our study, it is seen that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
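The PERT/CPM quantities named above follow from the standard formulas te = (to + 4tm + tp)/6 and task variance ((tp - to)/6)^2. Below is a minimal sketch with hypothetical task durations standing in for the plant data:

```python
# PERT critical-path sketch; the four tasks and the deadline are made up.
from math import erf, sqrt

def pert_expected(to, tm, tp):
    """Classical PERT expected time te = (to + 4*tm + tp) / 6."""
    return (to + 4 * tm + tp) / 6.0

def pert_variance(to, tp):
    """Classical PERT variance ((tp - to) / 6)**2."""
    return ((tp - to) / 6.0) ** 2

# Hypothetical critical-path tasks: (optimistic, most likely, pessimistic) days.
critical_path = [(4, 6, 10), (8, 10, 14), (12, 15, 20), (5, 7, 11)]

cpd = sum(pert_expected(*t) for t in critical_path)          # critical path duration
sigma = sqrt(sum(pert_variance(t[0], t[2]) for t in critical_path))

# P(project finishes within `deadline` days) under the normal approximation.
deadline = 50.0
z = (deadline - cpd) / sigma
prob = 0.5 * (1 + erf(z / sqrt(2)))
print(f"CPD = {cpd:.1f} days, sigma = {sigma:.2f}, P(T <= {deadline}) = {prob:.2f}")
```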
ERIC Educational Resources Information Center
Kaplan, Dave; Clapper, Thomas
2007-01-01
U.S. transportation data suggest that the number of vehicle miles traveled has far surpassed new capacity, resulting in increased traffic congestion in many communities throughout the country. This article reports on traffic congestion around a university campus located within a small town. The mix of trip purposes varies considerably in this…
DOT National Transportation Integrated Search
2008-12-01
Traffic congestion in the Washington, DC area, especially congestion on our freeways, costs our residents every day in terms of wasted time, fuel, and increased air pollution. Highway studies have determined that once traffic volumes exceed the c...
Analysis of Non-Uniform Gain for Control of a Deformable Mirror in an Adaptive-Optics System
2008-03-01
[Excerpt garbled in extraction; recoverable content: Figure 3.6, "Primary layout", shows the layout developed for the majority of the experiments, with the steering mirror (SM) path boxed in blue and the Shack-Hartmann wavefront sensor to deformable mirror (SH WFS-DM) path boxed in red; Section 3.1.5.1 describes the steering mirror path. A fragment of the report's reference list follows.]
Losartan corrects abnormal frequency response of renal vasculature in congestive heart failure.
DiBona, Gerald F; Sawin, Linda L
2003-11-01
In congestive heart failure, renal blood flow is decreased and renal vascular resistance is increased in a setting of increased activity of both the sympathetic nervous and renin-angiotensin systems. The renal vasoconstrictor response to renal nerve stimulation is enhanced. This is associated with an abnormality in the low-pass filter function of the renal vasculature wherein higher frequencies (≥0.01 Hz) within renal sympathetic nerve activity are not normally attenuated and are passed into the renal blood flow signal. This study tested the hypothesis that excess angiotensin II action mediates the abnormal frequency response characteristics of the renal vasculature in congestive heart failure. In anesthetized rats, the renal vasoconstrictor response to graded frequency renal nerve stimulation was significantly greater in congestive heart failure than in control rats. Losartan attenuated the renal vasoconstrictor response to a significantly greater degree in congestive heart failure than in control rats. In control rats, the frequency response of the renal vasculature was that of a first order (-20 dB/frequency decade) low-pass filter with a corner frequency (-3 dB, 30% attenuation) of 0.002 Hz and 97% attenuation (-30 dB) at ≥0.1 Hz. In congestive heart failure rats, attenuation did not exceed 45% (-5 dB) over the frequency range of 0.001-0.6 Hz. The frequency response of the renal vasculature was not affected by losartan treatment in control rats but was completely restored to normal by losartan treatment in congestive heart failure rats. The enhanced renal vasoconstrictor response to renal nerve stimulation and the associated abnormality in the frequency response characteristics of the renal vasculature seen in congestive heart failure are mediated by the action of angiotensin II on renal angiotensin II AT1 receptors.
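The numbers quoted above are consistent with the standard first-order low-pass magnitude response, shown here as a reading aid (the abstract does not state the formula explicitly):

$$
|H(f)| = \frac{1}{\sqrt{1 + \left(f/f_c\right)^{2}}}, \qquad f_c \approx 0.002\ \mathrm{Hz},
$$

so that |H(f_c)| = 1/sqrt(2) (-3 dB, about 30% attenuation), the roll-off above the corner is -20 dB per decade, and |H(0.1 Hz)| is roughly f_c/0.1 = 0.02, i.e., approximately the 97% attenuation reported.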
On-board congestion control for satellite packet switching networks
NASA Technical Reports Server (NTRS)
Chu, Pong P.
1991-01-01
It is desirable to incorporate packet switching capability on-board for future communication satellites. Because of the statistical nature of packet communication, incoming traffic fluctuates and may cause congestion. Thus, it is necessary to incorporate a congestion control mechanism as part of the on-board processing to smooth and regulate the bursty traffic. Although there are extensive studies on congestion control for both baseband and broadband terrestrial networks, these schemes are not feasible for space based switching networks because of the unique characteristics of satellite link. Here, we propose a new congestion control method for on-board satellite packet switching. This scheme takes into consideration the long propagation delay in satellite link and takes advantage of the the satellite's broadcasting capability. It divides the control between the ground terminals and satellite, but distributes the primary responsibility to ground terminals and only requires minimal hardware resource on-board satellite.
Traffic congestion and blood pressure elevation: A comparative cross-sectional study in Lebanon.
Bou Samra, Patrick; El Tomb, Paul; Hosni, Mohammad; Kassem, Ahmad; Rizk, Robin; Shayya, Sami; Assaad, Sarah
2017-12-01
This comparative cross-sectional study examines the association between traffic congestion and elevation of systolic and/or diastolic blood pressure levels among a convenience sample of 310 drivers. Data collection took place during a gas station pause at a fixed time of day. Higher average systolic (142 vs 123 mm Hg) and diastolic (87 vs 78 mm Hg) blood pressures were detected among drivers exposed to traffic congestion compared with those who were not exposed (P<.001), while controlling for body mass index, age, sex, pack-year smoking, driving hours per week, and occupational driving. Moreover, among persons exposed to traffic congestion, longer exposure time was associated with higher systolic and diastolic blood pressures. Further studies are needed to better understand the mechanisms of the significant association between elevated blood pressure and traffic congestion. ©2017 Wiley Periodicals, Inc.
Refractive indices used by the Haag-Streit Lenstar to calculate axial biometric dimensions.
Suheimat, Marwan; Verkicharla, Pavan K; Mallen, Edward A H; Rozema, Jos J; Atchison, David A
2015-01-01
To estimate refractive indices used by the Lenstar biometer to translate measured optical path lengths into geometrical path lengths within the eye. Axial lengths of model eyes were determined using the IOLMaster and Lenstar biometers; comparing those lengths gave an overall eye refractive index estimate for the Lenstar. Using the Lenstar Graphical User Interface, we noticed that boundaries between media could be manipulated and opposite changes in optical path lengths on either side of the boundary could be introduced. Those ratios were combined with the overall eye refractive index to estimate separate refractive indices. Furthermore, Haag-Streit provided us with a template to obtain 'air thicknesses' to compare with geometrical distances. The axial length estimates obtained using the IOLMaster and the Lenstar agreed to within 0.01 mm. Estimates of group refractive indices used in the Lenstar were 1.340, 1.341, 1.415, and 1.354 for cornea, aqueous, lens, and overall eye, respectively. Those refractive indices did not match those of schematic eyes, but were close in the cases of aqueous and lens. Linear equations relating air thicknesses to geometrical thicknesses were consistent with our findings. The Lenstar uses different refractive indices for different ocular media. Some of the refractive indices, such as that for the cornea, are not physiological; therefore, it is likely that the calibrations in the instrument correspond to instrument-specific corrections and are not the real optical path lengths. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
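The translation the abstract describes reduces, per ocular medium, to dividing the measured optical path length (OPL) by the medium's group refractive index; the numeric example below is illustrative, not instrument data:

$$
d_{\mathrm{geom}} = \frac{\mathrm{OPL}}{n_g}, \qquad \text{e.g.}\quad \frac{4.30\ \mathrm{mm}}{1.341} \approx 3.21\ \mathrm{mm}
$$

for an aqueous segment, using the group index estimated above.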
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, A.; Repac, B.; Gonder, J.
This poster presents initial estimates of the net energy impacts of automated vehicles (AVs). Automated vehicle technologies are increasingly recognized as having potential to decrease carbon dioxide emissions and petroleum consumption through mechanisms such as improved efficiency, better routing, lower traffic congestion, and by enabling advanced technologies. However, some effects of AVs could conceivably increase fuel consumption through possible effects such as longer distances traveled, increased use of transportation by underserved groups, and increased travel speeds. The net effect on petroleum use and climate change is still uncertain. To make an aggregate system estimate, we first collect best estimates for the energy impacts of approximately ten effects of AVs. We then use a modified Kaya Identity approach to estimate the range of aggregate effects and avoid double counting. We find that depending on numerous factors, there is a wide range of potential energy impacts. Adoption of automated personal or shared vehicles can lead to significant fuel savings but has potential for backfire.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zepf, J.; Rufa, G.
1994-04-01
This paper focuses on the transient performance analysis of the congestion and flow control mechanisms in CCITT Signaling System No. 7 (SS7). Special attention is directed to the impacts of the introduction of intelligent services and new applications, e.g., Freephone, credit card services, user-to-user signaling, etc. In particular, we show that signaling traffic characteristics like signaling scenarios or signaling message length as well as end-to-end signaling capabilities have a significant influence on the congestion and flow control and, therefore, on the real-time signaling performance. One important result of our performance studies is that if, e.g., intelligent services are introduced, the SS7 congestion and flow control does not work correctly. To solve this problem, some reinvestigations into these mechanisms would be necessary. Therefore, some approaches, e.g., modification of the Signaling Connection Control Part (SCCP) congestion control, usage of the SCCP relay function, or a redesign of the MTP flow control procedures are discussed in order to guarantee the efficacy of the congestion and flow control mechanisms also in the future. 16 refs.
Diuretics as pathogenetic treatment for heart failure
Guglin, Maya
2011-01-01
Increased intracardiac filling pressure or congestion causes symptoms and leads to hospital admissions in patients with heart failure, regardless of their systolic function. A history of hospital admission, in turn, predicts further hospitalizations and morbidity, and a higher number of hospitalizations determine higher mortality. Congestion is therefore the driving force of the natural history of heart failure. Congestion is the syndrome shared by heart failure with preserved and reduced systolic function. These two conditions have almost identical morbidity, mortality, and survival because the outcomes are driven by congestion. A small difference in favor of heart failure with preserved systolic function comes from decreased ejection fraction and left ventricular remodeling which is only present in heart failure with decreased systolic function. The magnitude of this difference reflects the contribution of decreased systolic function and ventricular remodeling to the progression of heart failure. The only treatment available for congestion is fluid removal via diuretics, ultrafiltration, or dialysis. It is the only treatment that works equally well for heart failure with reduced and preserved systolic function because it affects congestion, the main pathogenetic feature of the disease. Diuretics are pathogenetic therapy for heart failure. PMID:21403798
Congestive heart failure in subjects with thyrotoxicosis in a black community
Anakwue, R C; Onwubere, B J C; Anisiuba, B C; Ikeh, V O; Mbah, A; Ike, S O
2010-01-01
Introduction: Thyroid hormone has profound effects on a number of metabolic processes in virtually all tissues but the cardiovascular manifestations are prominent usually creating a hyperdynamic circulatory state. Thyrotoxicosis is not a common cause of congestive heart failure among black communities. Objectives: To determine the hospital prevalence, clinical characteristics and echocardiographic findings in patients with thyrotoxicosis who present with congestive heart failure (CCF) in the eastern part of Nigeria. Subjects and methods: A total of 50 subjects aged 15 years and above who were diagnosed as thyrotoxic following clinical and thyroid function tests were consecutively recruited. Fifty age- and sex-matched controls with no clinical or biochemical evidence of thyrotoxicosis and no comorbidities were used as controls. Two-dimensional echocardiography was carried out on all the subjects. CCF was determined clinically and echocardiographically. Results: Eight patients (5 females and 3 males) out of a total of 50 thyrotoxic patients presented with congestive heart failure. Conclusion: The study revealed that congestive heart failure can occur in thyrotoxicosis in spite of the associated hyperdynamic condition. The underlying mechanism may include direct damage by autoimmune myocarditis, congestive circulation secondary to excess sodium, and fluid retention. PMID:20730063
NASA Technical Reports Server (NTRS)
Jafri, Madiha J.; Ely, Jay J.; Vahala, Linda L.
2007-01-01
In this paper, neural network (NN) modeling is combined with fuzzy logic to estimate Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data. Combining fuzzy logic and NN modeling is shown to improve estimates of measured data over estimates obtained with NN alone. A plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.
NASA Astrophysics Data System (ADS)
Sutton, Virginia Kay
This paper examines statistical issues associated with estimating the paths of juvenile salmon through the intakes of Kaplan turbines. Passive sensors (hydrophones) detecting signals from ultrasonic transmitters implanted in individual fish released into the preturbine region were used to obtain the information needed to estimate fish paths through the intake. The aim and location of the sensors affect the spatial region in which the transmitters can be detected, and formulas relating this region to sensor aiming directions are derived. Cramer-Rao lower bounds for the variance of estimators of fish location are used to optimize the placement of each sensor. Finally, a statistical methodology is developed for analyzing angular data collected from optimally placed sensors.
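As a reading aid, the scalar form of the Cramer-Rao lower bound underlying the placement optimization is (the paper's vector, angular-data version will differ in detail):

$$
\operatorname{Var}\!\left(\hat{\theta}\right) \;\geq\; \frac{1}{\mathcal{I}(\theta)}, \qquad \mathcal{I}(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}\ln f(X;\theta)\right)^{2}\right],
$$

so sensor aim and location enter through the likelihood f, and the placement that maximizes the Fisher information minimizes the achievable variance of the fish-location estimate.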
Faizullah, Faiz
2016-01-01
The aim of the current paper is to present path-wise and moment estimates for solutions to stochastic functional differential equations (SFDEs) with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations and Hölder's, Bihari's, Gronwall's, and the Burkholder-Davis-Gundy inequalities are used to develop this theory. In addition, the path-wise asymptotic estimates and continuity of the pth moment for the solutions to SFDEs in the G-framework with the non-linear growth condition are shown.
NASA Astrophysics Data System (ADS)
Lin, XuXun; Yuan, PengCheng
2018-01-01
In this research we consider commuters' dynamic learning effects by modeling trip mode choice behavior from the new perspective of dynamic evolutionary game theory. We explore the behavior patterns of different types of commuters and study the evolution path and equilibrium properties under different traffic conditions. We further establish a dynamic parking charge optimal control (DPCOC) model to alter commuters' trip mode choice while minimizing the total social cost. Numerical tests show the following. (1) Under a fixed parking fee policy, the evolutionary results are completely decided by travel time, and the only way to induce a shift to public transit is to increase the parking charge. (2) Compared with a fixed parking fee policy, the DPCOC policy proposed in this research has several advantages. First, it can effectively steer the evolution path and evolutionarily stable strategy toward a better outcome while minimizing the total social cost. Second, it can reduce the sensitivity of trip mode choice behavior to traffic congestion and improve the ability to resist interferences and emergencies. Third, it is able to control the private car proportion to a stable state and make trip behavior more predictable for the transportation management department. The results can provide a theoretical basis and decision-making references for predicting commuters' mode choices, dynamically setting urban parking charge prices, and inducing public transit use.
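A minimal sketch of the evolutionary dynamic described above, using replicator dynamics for a car-versus-transit population. All payoff parameters and both fee schedules are hypothetical; the paper's DPCOC model solves an optimal-control problem rather than the simple congestion-linked fee used here.

```python
# Replicator dynamics for mode choice; payoffs and fees are made up.
def car_payoff(x, fee):
    # Car utility falls as the car share x raises congestion, and with the fee.
    free_flow, congestion_cost = 10.0, 12.0
    return free_flow - congestion_cost * x - fee

TRANSIT_PAYOFF = 4.0   # fixed transit utility (hypothetical)

def simulate(fee_fn, x0=0.9, dt=0.01, steps=5000):
    x = x0                                   # share of commuters choosing car
    for k in range(steps):
        fee = fee_fn(k * dt, x)
        avg = x * car_payoff(x, fee) + (1 - x) * TRANSIT_PAYOFF
        x += dt * x * (car_payoff(x, fee) - avg)   # replicator equation
    return x

fixed = simulate(lambda t, x: 2.0)                 # fixed parking fee
dynamic = simulate(lambda t, x: 2.0 + 4.0 * x)     # fee rises with car share
print(f"car share: fixed fee -> {fixed:.2f}, congestion-linked fee -> {dynamic:.2f}")
```

With these numbers the fixed fee settles at a car share of about one third, while the congestion-linked fee pushes the evolutionarily stable share lower, illustrating the induction effect the abstract describes.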
Butt, Ahsan Masood; Ismail, Amir; Lawson-Smith, Matthew; Shahid, Muhammad; Webb, Jill; Chester, Darren L
2016-01-01
Leeches are a well-recognized treatment for congested tissue. This study reviewed the efficacy of leech therapy for salvage of venous congested flaps and congested replanted or revascularized hand digits over a 2-year period. All patients treated with leeches between 1 Oct 2010 and 30 Sep 2012 (two years) at Queen Elizabeth Hospital, Birmingham, UK were included in the study. Details regarding the mode of injury requiring reconstruction, the surgical procedure, the duration of leech therapy, any requirement for subsequent surgery, and tissue salvage rates were recorded. Twenty tissues in 18 patients (13 men and 5 women) required leeches for tissue congestion over the 2 years. The mean patient age was 41 years (range 17-79). The defect requiring reconstruction followed trauma in 16 cases and tumour resection in two, with two miscellaneous causes. Thirteen cases had flap reconstruction, and seven digits in six patients had digit replantation or revascularization. Thirteen of 20 cases (65%) had successful tissue salvage following leech therapy for congestion (77%, 10 of 13 flaps; 43%, 3 of 7 digits). The rate of tissue salvage was good in pedicled flaps, 6/6 (100%), and in digital revascularizations, 2/3 (67%), but poor in digital replants, 1/4 (25%), and free flaps, 0/2 (0%). Leeches are a helpful tool for congested tissue salvage and, in this study, showed a greater survival benefit for pedicled flaps than for free flaps or digital replantations.
Kappagoda, C T; Ravi, K
1989-01-01
1. The effects of plasmapheresis on the responses of rapidly adapting receptors (RARs) and slowly adapting receptors (SARs) of the airways to pulmonary venous congestion were examined in dogs anaesthetized with alpha-chloralose. Pulmonary venous congestion was produced in a graded manner by partial obstruction of the mitral valve sufficient to raise the mean left atrial pressure by 5, 10 and 15 mmHg. Plasmapheresis was performed by withdrawing 10% of blood volume twice. 2. Both RARs (n = 11) and SARs (n = 5) responded to pulmonary venous congestion by increasing their activities. The responses of the former were proportionately greater. 3. After plasmapheresis which reduced the concentration of plasma proteins by 12.3 +/- 1.0%, the responses of the RARs to pulmonary venous congestion were enhanced significantly. There was no significant change in the responses of SARs. 4. In another set of six RARs, the effects of graded pulmonary venous congestion were investigated twice with an interval of 45 min between the two observations. No significant differences were noted between the two responses. 5. Collection of lymph from the tracheobronchial lymph duct (n = 6) showed that after plasmapheresis, there was an increase in the control lymph flow. In addition, the lymph flow was enhanced during pulmonary venous congestion (mean left atrial pressure increased by 10 mmHg). 6. It is suggested that a natural stimulus for the excitation of the RAR is a function of the fluid fluxes in the pulmonary extravascular space. PMID:2607464
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
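Schematically, the factorization described above takes the form (a paraphrase of the abstract, not the paper's exact expressions):

$$
Z = \operatorname{Tr} e^{-\beta \hat{H}}, \qquad \hat{H} = \hat{H}_{\mathrm{h}} + \hat{H}_{\mathrm{c}}, \qquad Z = Z_{\mathrm{h}}\,\left\langle G_{\mathrm{c}} \right\rangle_{\rho_{\mathrm{h}}},
$$

where Z_h, the harmonic partition function, is evaluated analytically, rho_h is the harmonic sampling distribution over path configurations (here approximated with a Gaussian mixture model), and the average of the coupling factors G_c along the path is the Monte Carlo estimator.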
A Comparison of Hybrid Approaches for Turbofan Engine Gas Path Fault Diagnosis
NASA Astrophysics Data System (ADS)
Lu, Feng; Wang, Yafan; Huang, Jinquan; Wang, Qihang
2016-09-01
A hybrid diagnostic method utilizing an Extended Kalman Filter (EKF) and an Adaptive Genetic Algorithm (AGA) is presented for performance degradation estimation and sensor anomaly detection in a turbofan engine. The EKF is used to estimate engine component performance degradation for gas path fault diagnosis. The AGA is introduced into the integrated architecture and applied for sensor bias detection. The contribution of this work is the comparison of Kalman Filter (KF)-AGA algorithms and Neural Network (NN)-AGA algorithms within a unified framework for gas path fault diagnosis. The NN needs to be trained off-line with a large number of prior fault mode data, and when a new fault mode occurs, its estimation accuracy decreases markedly; the application of the Linearized Kalman Filter (LKF) and EKF is not restricted in such cases. The crossover factor and the mutation factor are adapted to the fitness function at each generation in the AGA, which consumes less time to search for the optimal sensor bias value than the Genetic Algorithm (GA). In summary, we conclude that the hybrid EKF-AGA algorithm is the best choice among the algorithms discussed for gas path fault diagnosis of a turbofan engine.
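A generic discrete-time EKF predict/update cycle of the kind the abstract assigns to degradation estimation is sketched below. The model functions and all matrices are placeholders, not the paper's turbofan engine model.

```python
# Generic EKF step; the toy "engine" below is a 2-state random-walk model.
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One extended-Kalman-filter cycle: predict with f, correct with z."""
    x_pred = f(x)                            # nonlinear state propagation
    P_pred = F @ P @ F.T + Q                 # covariance propagation
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-state example: slowly drifting health scalars observed directly.
f = lambda x: x                              # random-walk degradation model
h = lambda x: x                              # sensors read the health state
F = H = np.eye(2)
Q, R = 1e-5 * np.eye(2), 1e-2 * np.eye(2)
x, P = np.array([1.0, 1.0]), 0.1 * np.eye(2)
for z in np.array([[0.99, 0.98], [0.985, 0.975]]):
    x, P = ekf_step(x, P, z, f, h, F, H, Q, R)
print("estimated health parameters:", x)
```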
Ruiz-Patiño, Alejandro; Acosta-Ospina, Laura Elena; Rueda, Juan-David
2017-04-01
Congestion in the postanesthesia care unit (PACU) leads to the formation of waiting queues of patients being transferred after surgery, negatively affecting hospital resources: as patients recover in the operating room, incoming surgeries are delayed. The purpose of this study was to establish the impact of this phenomenon in multiple settings. An operational mathematical study based on queuing theory was performed. Average queue length, average queue waiting time, and daily queue waiting time were evaluated. Calculations were based on the mean daily patient flow, PACU length of stay, occupation, and current number of beds. Data were prospectively collected during a period of 2 months, recording the entry and exit time of each patient taken to the PACU, and entered into a computational model built in MS Excel. To account for data uncertainty, deterministic and probabilistic sensitivity analyses for all dependent variables were performed. With a mean daily patient flow of 40.3 and an average PACU length of stay of 4 hours, the average total lost surgical opportunity time was estimated at 2.36 hours (95% CI: 0.36-4.74 hours). The opportunity cost was calculated at $1592 per lost hour. Sensitivity analysis showed that an increase of two beds is required to resolve the queue formation. When congestion has a negative impact on opportunity cost in the surgical setting, queuing analysis supports definitive actions to solve the problem, improving quality of service and resource utilization. Copyright © 2016 Elsevier Inc. All rights reserved.
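The study's spreadsheet model is not public, but an M/M/c queue is the textbook idealization of such a unit. The sketch below uses the abstract's inputs (40.3 patients/day, 4-hour mean stay) with illustrative bed counts; the Erlang C formula gives the probability that an arriving patient must wait.

```python
# M/M/c sketch of a PACU queue; the bed counts tried are illustrative.
from math import factorial

def erlang_c(c, a):
    """P(wait) for an M/M/c queue with offered load a = lambda/mu Erlangs."""
    rho = a / c
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

lam = 40.3 / 24.0          # arrivals per hour (40.3 patients/day)
mu = 1.0 / 4.0             # service rate per bed per hour (4-h mean stay)
a = lam / mu               # offered load in Erlangs (about 6.7)

for c in (7, 8, 9, 10):    # candidate bed counts
    pw = erlang_c(c, a)
    wq = pw / (c * mu - lam)           # mean wait in queue (hours)
    lq = lam * wq                      # mean queue length (Little's law)
    print(f"beds={c}: P(wait)={pw:.2f}, Wq={wq:.2f} h, Lq={lq:.2f}")
```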
CFD model simulation of LPG dispersion in urban areas
NASA Astrophysics Data System (ADS)
Pontiggia, Marco; Landucci, Gabriele; Busini, Valentina; Derudi, Marco; Alba, Mario; Scaioni, Marco; Bonvicini, Sarah; Cozzani, Valerio; Rota, Renato
2011-08-01
There is increasing concern about releases of hazardous industrial materials (either toxic or flammable) due to terrorist attacks or accidental events in congested industrial or urban areas. In particular, a reliable estimation of the hazardous cloud footprint as a function of time is required to assist emergency response decisions and planning as a primary element of any Decision Support System. Among the various hazardous materials, the hazard due to road and rail transportation of liquefied petroleum gas (LPG) is well known, since large quantities of LPG are commercialized and the rail or road transportation routes often run close to downtown areas. Since the widely used dispersion models do not account for the effects of obstacles such as buildings, tanks, railcars, or trees, in this paper a CFD model has been applied to simulate the reported consequences of a recent major accident involving an LPG railcar rupture in a congested urban area (the town of Viareggio, Italy), showing both the large influence of obstacles on LPG dispersion and the potential of CFD models to foresee such influence.
Code of Federal Regulations, 2010 CFR
2010-01-01
Section 105.21, Aeronautics and Space: ... congested area or an open-air assembly of persons. (a) No person may conduct a parachute operation, and no pilot in command of ... a congested area of a city, town, or settlement, or an open-air assembly of persons unless a certificate of ...
Airport Characterization for the Adaptation of Surface Congestion Management Approaches
2013-02-01
[Excerpt garbled in extraction; recoverable content: mentions the Surface Congestion Management Program at New York JFK airport [6,7] and human-in-the-loop simulations of the Spot and Runway Departure Advisor. Reference fragments: "... a surface congestion management technique at New York JFK airport," AIAA Aviation Technology, Integration and Operations (ATIO) Conference, Virginia Beach, VA, September 2011; [7] S. Stroiney, H. Khadilkar and H. Balakrishnan, "Ground Management Program at JFK Airport: Implementation and ...".]
Taniguchi, Tatsunori; Ohtani, Tomohito; Kioka, Hidetaka; Tsukamoto, Yasumasa; Onishi, Toshinari; Nakamoto, Kei; Katsimichas, Themistoklis; Sengoku, Kaoruko; Chimura, Misato; Hashimoto, Haruko; Yamaguchi, Osamu; Sawa, Yoshiki; Sakata, Yasushi
2018-01-12
This study sought to investigate whether elevated liver stiffness (LS) values at discharge reflect residual liver congestion and are associated with worse outcomes in patients with heart failure (HF). Transient elastography is a newly developed, noninvasive method for assessing LS, which can be highly reflective of right-sided filling pressure associated with passive liver congestion in patients with HF. LS values were determined for 171 hospitalized patients with HF before discharge using a Fibroscan device. The median LS value was 5.6 kPa (interquartile range: 4.4 to 8.1; range 2.4 to 39.7) and that of right-sided filling pressure, which was estimated based on LS, was 5.7 mm Hg (interquartile range: 4.1 to 8.2 mm Hg; range 0.1 to 18.9 mm Hg). The patients in the highest LS tertile (>6.9 kPa, corresponding to an estimated right-sided filling pressure of >7.1 mm Hg) had advanced New York Heart Association functional class, high prevalence of jugular venous distention and moderate/severe tricuspid regurgitation, large inferior vena cava (IVC) diameter, low hemoglobin and hematocrit levels, high serum direct bilirubin level, and a similar left ventricular ejection fraction compared with the lower tertiles. During follow-up periods (median: 203 days), 8 (5%) deaths and 33 (19%) hospitalizations for HF were observed. The patients in the highest LS group had a significantly higher mortality rate and HF rehospitalization (hazard ratio: 3.57; 95% confidence interval: 1.93 to 6.83; p < 0.001) compared with the other tertiles. Although LS correlated with IVC diameter and serum direct bilirubin and brain natriuretic peptide levels, LS values were predictive of worse outcomes, even after adjustment for these indices. These data suggest that LS is a useful index for assessing systemic volume status and predicting the severity of HF, and that the presence of liver congestion at discharge is associated with worse outcomes in patients with HF. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Robot path planning algorithm based on symbolic tags in dynamic environment
NASA Astrophysics Data System (ADS)
Vokhmintsev, A.; Timchenko, M.; Melnikov, A.; Kozko, A.; Makovetskii, A.
2017-09-01
The present work proposes new heuristic algorithms for path planning of a mobile robot in an unknown dynamic environment; the algorithms have theoretically proven estimates of computational complexity and have been validated on specific applied problems.
Zhao, Yongyu; Bordwell, Frederick G.
1996-09-20
Cleavage of radical anions, HA•−, has been considered to give either H• + A− (path a) or H− + A• (path b), and factors determining the preferred mode of cleavage have been discussed. It is conceivable that cleavage to give a proton and a radical dianion, HA•− ⇌ H+ + A•2− (path c), might also be feasible. A method, based on a thermodynamic cycle, to estimate the bond dissociation free energy (BDFE) for path c has been devised. Comparison of the BDFEs for cleavage of the radical anions derived from 24 nitroaromatic OH, SH, NH, and CH acids by paths a, b, and c has shown that path c is favored thermodynamically.
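One plausible thermodynamic cycle for path c, shown here for orientation only (the abstract does not spell out the authors' exact construction), combines the parent acidity with two electrode potentials: HA•− → HA + e−, HA → H+ + A−, A− + e− → A•2−, giving at 298 K

$$
\Delta G_{\mathrm{c}} = 1.37\,\mathrm{p}K_{\mathrm{a}}(\mathrm{HA}) + 23.06\left[E^{\circ}(\mathrm{HA}/\mathrm{HA}^{\bullet-}) - E^{\circ}(\mathrm{A}^{-}/\mathrm{A}^{\bullet 2-})\right] \quad (\mathrm{kcal\ mol^{-1}}),
$$

with the potentials in volts against a common reference electrode.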
Performance analysis of SS7 congestion controls under sustained overload
NASA Astrophysics Data System (ADS)
Manfield, David R.; Millsteed, Gregory K.; Zukerman, Moshe
1994-04-01
Congestion controls are a key factor in achieving the robust performance required of common channel signaling (CCS) networks in the face of partial network failures and extreme traffic loads, especially as networks become large and carry high traffic volume. The CCITT recommendations define a number of types of congestion control, and the parameters of the controls must be well set in order to ensure their efficacy under transient and sustained signalling network overload. The objective of this paper is to present a modeling approach to the determination of the network parameters that govern the performance of the SS7 congestion controls under sustained overload. Results of the investigation by simulation are presented and discussed.
Performance Analysis on the Coexistence of Multiple Cognitive Radio Networks
2015-05-28
[Excerpt garbled in extraction; recoverable content: cognitive radio is key to minimizing spectral congestion through its adaptability, since static allocation of spectrum results in congestion in some parts of the spectrum and non-use in others. The study considers primary as well as secondary user (SU) activities in multiple CR networks and shows that the scheduler provided much-needed gain during congestion.]
CONGESTIVE HEART FAILURE ASSOCIATED WITH PREGNANCY IN OKAPI (OKAPIA JOHNSTONI).
Warren, Joshua D; Aitken-Palmer, Copper; Weldon, Alan D; Flanagan, Joseph P; Howard, Lauren L; Garner, Michael M; Citino, Scott B
2017-03-01
Acute signs associated with cardiovascular disease occurred in three pregnant okapi ( Okapia johnstoni ) during early to midgestation and progressed to congestive heart failure. Congestive heart failure was diagnosed antemortem using echocardiography and plasma cardiac troponin levels. Clinical signs included decreased activity, hyporexia, tachypnea, dyspnea, flared nostrils, and productive coughing with copious amounts of foamy nasal discharge. Parenteral and oral treatment with furosemide, enalapril, and spironolactone controlled clinical signs in the three okapi allowing each to carry out one pregnancy to term. Two okapi carried the first pregnancy to term after showing signs, while one okapi aborted the first calf and gave birth to a healthy calf in a subsequent pregnancy. Subsequent pregnancy in one okapi ended with abortion and associated dystocia and endometritis. Following parturition, clinical signs associated with heart failure resolved in all three individuals; serial echocardiography in two individuals showed improvement in fractional shortening and left atrial size and all three okapi showed markedly decreased pleural effusion and resolution of pulmonary edema. However, subsequent pregnancies in all three okapi induced respiratory distress and recurrence of congestive heart failure; one okapi died from congestive heart failure associated with subsequent pregnancy. This case series describes the clinical presentation and pathologic findings of congestive heart failure during pregnancy in adult okapi.
Yoshida, Morikatsu; Beppu, Toru; Shiraishi, Shinya; Tsuda, Noriko; Sakamoto, Fumi; Kuramoto, Kunitaka; Okabe, Hirohisa; Nitta, Hidetoshi; Imai, Katsunori; Tomiguchi, Seiji; Baba, Hideo; Yamashita, Yasuyuki
2018-05-01
Background/Aim: The sacrifice of a major hepatic vein can cause hepatic venous congestion (HVC). We evaluated the effects of HVC on regional liver function using the liver uptake value (LUV), calculated from 99mTc-labeled galactosyl human serum albumin (99mTc-GSA) single-photon emission computed tomography (SPECT)/contrast-enhanced computed tomography (CE-CT) fused images. Patients and Methods: Sixty-two patients who underwent 99mTc-GSA SPECT/CE-CT prior to hepatectomy for liver cancer and at 7 days after surgery were divided into groups with (n=8) and without HVC (n=54). In the HVC group, CT volume (CTv) and LUV were separately calculated in both congested and non-congested areas. Results: The remnant LUV/CTv of the HVC group was significantly smaller than that of the non-HVC group (p<0.01). The mean functional ratio was 0.47±0.05, and all ratios were ≥0.39. Conclusion: After hepatectomy with sacrifice of a major hepatic vein, liver function per unit volume in the congested areas was approximately 40% of that in the non-congested areas. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Topical nasal decongestant oxymetazoline (0.05%) provides relief of nasal symptoms for 12 hours.
Druce, H M; Ramsey, D L; Karnati, S; Carr, A N
2018-05-22
Nasal congestion, often referred to as stuffy nose or blocked nose, is one of the most prevalent and bothersome symptoms of an upper respiratory tract infection. Oxymetazoline, a widely used intranasal decongestant, offers fast symptom relief, but little is known about the duration of effect. We report the results of 2 randomized, double-blind, vehicle-controlled, single-dose, parallel clinical studies (Study 1, n=67; Study 2, n=61) in which the efficacy of an oxymetazoline (0.05% Oxy) nasal spray in patients with acute coryzal rhinitis was assessed over a 12-hour period. Data were collected on both subjective relief of nasal congestion (6-point nasal congestion scale) and objective measures of nasal patency (anterior rhinomanometry) in both studies. A pooled analysis showed statistically significant changes from baseline in subjective nasal congestion for 0.05% oxymetazoline and vehicle at each hourly time-point from Hour 1 through Hour 12 (marginally significant at Hour 11). An objective measure of nasal flow was statistically significant at each time-point up to 12 hours. Adverse events on either treatment were infrequent. The number of subjects who achieved an improvement of at least 1.0 on the 6-point subjective nasal congestion scale was significantly higher in the Oxy group than in the vehicle group at all hourly time-points. This study shows for the first time that oxymetazoline provides both statistically significant and clinically meaningful relief of nasal congestion and improves nasal airflow for up to 12 hours following a single dose.
Brunyé, Tad T; Mahoney, Caroline R; Taylor, Holly A
2015-04-01
When navigating, people tend to overestimate distances when routes contain more turns, termed the route-angularity effect. Three experiments examined the source and generality of this effect. The first two experiments examined whether route-angularity effects occur while viewing maps and might be related to sex differences or sense of direction. The third experiment tested whether the route-angularity effect would occur with stimuli devoid of spatial context, reducing influences of environmental experience and visual complexity. In the three experiments, participants (N=1,552; M=32.2 yr.; 992 men, 560 women) viewed paths plotted on maps (Exps. 1 and 2) or against a blank background (Exp. 3). The depicted paths were always the same overall length, but varied in the number of turns (from 1 to 7) connecting an origin and destination. Participants were asked to estimate the time to traverse each path (Exp. 1) or the length of each path (Exps. 2 and 3). The Santa Barbara Sense of Direction questionnaire was administered to assess whether overall spatial sense of direction would be negatively related to the magnitude of the route-angularity effect. Repeated-measures analyses of variance (ANOVAs) indicated that paths with more turns elicited estimates of greater distance and travel times, whether they were depicted on maps or blank backgrounds. Linear regressions also indicated that these effects were significantly larger in those with a relatively low sense of direction. The results support the route-angularity effect and extend it to paths plotted on map-based stimuli. Furthermore, because the route-angularity effect was shown with paths plotted against blank backgrounds, route-angularity effects are not specific to understanding environments and may arise at the level of visual perception.
NASA Astrophysics Data System (ADS)
Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian
2017-04-01
Flow patterns and seasonal as well as diurnal temperature variations control ecological and biogeochemical conditions in hyporheic sediments; in particular, hyporheic temperatures have a great impact on many microbial processes. In this study we used 3-D coupled water flow and heat transport simulations with the HydroGeoSphere code, in combination with high-frequency observations of hydraulic heads and temperatures, to quantify reach-scale water and heat flux across the river-groundwater interface and the hyporheic temperature dynamics of a lowland gravel-bed river. The magnitude and dynamics of the simulated temperatures matched the observations, with an average mean absolute error of 0.7 °C and an average Nash-Sutcliffe efficiency of 0.87. Our results highlight that the average temperature in the hyporheic zone follows the temperature in the river, which is characterized by distinct seasonal and daily temperature cycles. Individual hyporheic flow path temperatures vary substantially around the average hyporheic temperature. Hyporheic flow path temperature was found to depend strongly on the flow path residence time and the temperature gradient between river and groundwater; that is, in winter the average temperature of long flow paths is potentially higher than that of short flow paths. Based on the simulation results we derived a general empirical relationship estimating the influence of hyporheic flow path residence time on hyporheic flow path temperature. Furthermore, we used an empirical relationship between effective temperature and respiration rate to estimate the influence of hyporheic flow path residence time and temperature on hyporheic oxygen consumption. This study highlights the relation between complex hyporheic temperature patterns and hyporheic residence times, and their implications for temperature-sensitive biogeochemical processes.
2016-05-11
[Excerpt garbled in extraction, with a duplicated fragment; recoverable content: concerns new physically based prediction models for all-weather path attenuation estimation at Ka, V and W band from multi-channel microwave radiometric data, and measurement campaigns characterizing the medium behavior at these frequency bands from both a physical and a statistical point of view (e.g., [5]-[7]).]
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Dishman, W. K.
1982-01-01
A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.
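A hedged sketch of the profile idea described above: uniform rain below a breakpoint rate, an exponentially decaying effective rate above it, integrated through the power-law specific attenuation gamma = a*R^b (dB/km). The coefficients a, b and the breakpoint and decay constants below are illustrative stand-ins, not the published SAM values.

```python
# Path attenuation with an exponential horizontal rain profile (illustrative).
import numpy as np

def path_attenuation_db(rate, path_km, a=0.0101, b=1.276,
                        breakpoint=10.0, decay=1.0 / 22.0, n=2000):
    """Total attenuation (dB) along a path of length path_km (km)."""
    l = np.linspace(0.0, path_km, n)
    if rate <= breakpoint:
        profile = np.full_like(l, rate)         # uniform low-rate rain
    else:
        # Effective rate decays with distance for intense, cellular rain.
        profile = rate * np.exp(-decay * np.log(rate / breakpoint) * l)
    dl = l[1] - l[0]
    return float(np.sum(a * profile**b) * dl)   # rectangle-rule integral

print(f"{path_attenuation_db(25.0, 5.0):.2f} dB over 5 km at 25 mm/h")
```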
Development of a digital automatic control law for steep glideslope capture and flare
NASA Technical Reports Server (NTRS)
Halyo, N.
1977-01-01
A longitudinal digital guidance and control law for steep glideslopes using MLS (Microwave Landing System) data is developed for CTOL aircraft using modern estimation and control techniques. The control law covers the final approach phases of glideslope capture, glideslope tracking, and flare to touchdown for automatic landings under adverse weather conditions. The control law uses a constant gain Kalman filter to process MLS and body-mounted accelerometer data to form estimates of flight path errors and wind velocities including wind shear. The flight path error estimates and wind estimates are used for feedback in generating control surface commands. Results of a digital simulation of the aircraft dynamics and the guidance and control law are presented for various wind conditions.
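A minimal constant-gain (steady-state) Kalman filter of the kind described, blending an MLS-like position fix with body-axis acceleration to track flight-path error. The gains, noise levels, and measurement sequence are illustrative, not the certified control-law values.

```python
# Constant-gain filter: propagate with the accelerometer, correct with MLS.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [path error, error rate]
B = np.array([[0.5 * dt**2], [dt]])     # accelerometer input matrix
H = np.array([[1.0, 0.0]])              # MLS measures position error
K = np.array([[0.30], [0.55]])          # fixed gains (precomputed offline)

def step(x, z_mls, a_body):
    """One propagate/correct cycle with fixed gains."""
    x_pred = F @ x + B * a_body
    return x_pred + K * (z_mls - (H @ x_pred).item())

x = np.zeros((2, 1))
for z, a in [(1.0, -0.2), (0.8, -0.2), (0.5, -0.1)]:   # fake measurements
    x = step(x, z, a)
print("estimated [error, rate]:", x.ravel())
```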
Analysis of Aircraft Clusters to Measure Sector-Independent Airspace Congestion
NASA Technical Reports Server (NTRS)
Bilimoria, Karl D.; Lee, Hilda Q.
2005-01-01
The Distributed Air/Ground Traffic Management (DAG-TM) concept of operations permits appropriately equipped aircraft to conduct Free Maneuvering operations. These independent aircraft have the freedom to optimize their trajectories in real time according to user preferences; however, they also take on the responsibility to separate themselves from other aircraft while conforming to any local Traffic Flow Management (TFM) constraints imposed by the air traffic service provider (ATSP). Examples of local TFM constraints include temporal constraints such as a required time of arrival (RTA), as well as spatial constraints such as regions of convective weather, special use airspace, and congested airspace. Under current operations, congested airspace typically refers to a sector(s) that cannot accept additional aircraft due to controller workload limitations; hence Dynamic Density (a metric that is indicative of controller workload) can be used to quantify airspace congestion. However, for Free Maneuvering operations under DAG-TM, an additional metric is needed to quantify the airspace congestion problem from the perspective of independent aircraft. Such a metric would enable the ATSP to prevent independent aircraft from entering any local areas of congestion in which the flight deck based systems and procedures may not be able to ensure separation. This new metric, called Gaggle Density, offers the ATSP a mode of control to regulate normal operations and to ensure safety and stability during rare-normal or off-normal situations (e.g., system failures). It may be difficult to certify Free Maneuvering systems for unrestricted operations, but it may be easier to certify systems and procedures for specified levels of Gaggle Density that could be monitored by the ATSP, and maintained through relatively minor flow-rate (RTA type) restrictions. Since flight deck based separation assurance is airspace independent, the challenge is to measure congestion independent of sector boundaries. Figure 1, reproduced from Ref. 1, depicts an example traffic situation. When the situation is analyzed by sector boundaries (left side of figure), a Dynamic Density metric would identify excessive congestion in the central sector. When the same traffic situation is analyzed independent of sector boundaries (right side of figure), a Gaggle Density metric would identify congestion in two dynamically defined areas covering portions of several sectors. The first step towards measuring airspace-independent congestion is to identify aircraft clusters, i.e., groups of closely spaced aircraft. The objective of this work is to develop techniques to detect and classify clusters of aircraft.
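The cluster-identification step can be sketched as a union-find grouping of aircraft whose pairwise separation falls below a threshold; the positions and threshold below are invented, and a Gaggle Density metric could then be derived from the resulting cluster sizes and spacing.

```python
# Union-find clustering of closely spaced aircraft (illustrative data).
import numpy as np

def clusters(positions, threshold_nm):
    """Group aircraft whose pairwise distance is below threshold_nm."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < threshold_nm:
                parent[find(i)] = find(j)   # merge the two clusters

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pos = np.array([[0, 0], [3, 1], [4, 2], [40, 40], [41, 41], [80, 5]], float)
print(clusters(pos, threshold_nm=5.0))   # -> [[0, 1, 2], [3, 4], [5]]
```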
Risk of venous congestion in live donors of extended right liver graft
Radtke, Arnold; Sgourakis, George; Molmenti, Ernesto P; Beckebaum, Susanne; Cicinnati, Vito R; Schmidt, Hartmut; Peitgen, Heinz-Otto; Broelsch, Christoph E; Malagó, Massimo; Schroeder, Tobias
2015-01-01
AIM: To investigate middle hepatic vein (MHV) management in adult living donor liver transplantation and safer remnant volumes (RV). METHODS: There were 59 grafts with and 12 grafts without MHV (including 4 with MHV-5/8 reconstructions). All donors underwent our five-step protocol evaluation containing a preoperative protocol liver biopsy. Congestive vs non-congestive RV, remnant-volume-body-weight ratios (RVBWR) and postoperative outcomes were evaluated in 71 right graft living donors. Dominant vs non-dominant MHV anatomy in total liver volume (d-MHV/TLV vs nd-MHV/TLV) was constellated with large/small congestion volumes (CV-index). Small for size (SFS) and non-SFS remnant considerations were based on standard cut-off RVBWR and RV/TLV. Non-congestive RVBWR was based on non-congestive RV. RESULTS: MHV and non-MHV remnants showed no significant differences in RV, RV/TLV, RVBWR, total bilirubin, or INR. SFS-remnants with RV/TLV < 30% and non-SFS-remnants with RV/TLV ≥ 30% showed no significant differences either. RV and RVBWR for non-MHV (n = 59) and MHV-containing (n = 12) remnants were 550 ± 95 mL and 0.79 ± 0.1 vs 568 ± 97 mL and 0.79 ± 0.13, respectively (P = 0.423 and P = 0.919). Mean left RV/TLV was 35.8% ± 3.9%; for non-MHV (n = 59) and MHV-containing (n = 12) remnants it was 34.1% ± 3% vs 36% ± 4%, respectively (P = 0.148). Eight SFS-remnants with RVBWR < 0.65 had a significantly smaller RV/TLV than 63 non-SFS-remnants with RVBWR ≥ 0.65 [SFS: RV/TLV 32.4% (range: 28%-35.7%) vs non-SFS: RV/TLV 36.2% (range: 26.1%-45.5%), P < 0.009]. Six SFS-remnants with RV/TLV < 30% had significantly smaller RVBWR than 65 non-SFS-remnants with RV/TLV ≥ 30% (0.65 (range: 0.6-0.7) vs 0.8 (range: 0.6-1.27), P < 0.01). Two (2.8%) donors developed reversible liver failure. RVBWR and RV/TLV were concordant in 25%-33% of SFS and in 92%-94% of non-SFS remnants. MHV management options including complete MHV vs MHV-4A selective retention were necessary in n = 12 vs n = 2 remnants based on particularly risky congestive and non-congestive volume constellations. CONCLUSION: MHV procurement should consider individual remnant congestive and non-congestive volume components and anatomy characteristics; the RVBWR-RV/TLV constellation enables the identification of marginally small remnants. PMID:26019467
Spreading paths in partially observed social networks
NASA Astrophysics Data System (ADS)
Onnela, Jukka-Pekka; Christakis, Nicholas A.
2012-03-01
Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
Spreading paths in partially observed social networks.
Onnela, Jukka-Pekka; Christakis, Nicholas A
2012-03-01
Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
vanVonno, Catherine J; Ozminkowski, Ronald J; Smith, Mark W; Thomas, Eileen G; Kelley, Doniece; Goetzel, Ron; Berg, Gregory D; Jain, Susheel K; Walker, David R
2005-12-01
In 1999, the Blue Cross and Blue Shield Federal Employee Program (FEP) implemented a pilot disease management program to manage congestive heart failure (CHF) among members. The purpose of this project was to estimate the financial return on investment in the pilot CHF program, prior to a full program rollout. A cohort of 457 participants from the state of Maryland was matched to a cohort of 803 nonparticipants from a neighboring state where the CHF program was not offered. Each cohort was followed for 12 months before the program began and 12 months afterward. The outcome measures of primary interest were the differences over time in medical care expenditures paid by FEP and by all payers. Independent variables included indicators of program participation, type of heart disease, comorbidity measures, and demographics. From the perspective of the funding organization (FEP), the estimated return on investment for the pilot CHF disease management program was a savings of $1.08 in medical expenditure for every dollar spent on the program. Adding savings to other payers as well, the return on investment was a savings of $1.15 in medical expenditures per dollar spent on the program. The amount of savings depended upon CHF risk levels. The value of a pilot initiative and evaluation is that lessons for larger-scale efforts can be learned prior to full-scale rollout.
MacNeilage, Paul R.; Turner, Amanda H.
2010-01-01
Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s2). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s2). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843
Temporary Losses of Highway Capacity and Impacts on Performance: Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, S.M.
2004-11-10
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, railroad crossings, large trucks loading/unloading in urban areas, and other factors such as toll collection facilities and sub-optimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. In response to this need, Oak Ridge National Laboratory, sponsored by the Federal Highway Administration (FHWA), made an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events (Chin et al. 2002). This study, called the Temporary Loss of Capacity (TLC) study, estimated capacity loss and delay on freeways and principal arterials resulting from fatal and non-fatal crashes, vehicle breakdowns, and adverse weather, including snow, ice, and fog. In addition, it estimated capacity loss and delay caused by sub-optimal signal timing at intersections on principal arterials. It also included rough estimates of capacity loss and delay on Interstates due to highway construction and maintenance work zones. Capacity loss and delay were estimated for calendar year 1999, except for work zone estimates, which were estimated for May 2001 to May 2002 due to data availability limitations. Prior to the first phase of this study, which was completed in May of 2002, no nationwide estimates of temporary losses of highway capacity by type of capacity-reducing event had been made. This report describes the second phase of the TLC study (TLC2). TLC2 improves upon the first study by expanding the scope to include delays from rain, toll collection facilities, railroad crossings, and commercial truck pickup and delivery (PUD) activities in urban areas. It includes estimates of work zone capacity loss and delay for all freeways and principal arterials, rather than for Interstates only. It also includes improved estimates of delays caused by fog, snow, and ice, which are based on data not available during the initial phase of the study. Finally, computational errors involving crash and breakdown delay in the original TLC report are corrected.
Involvement of systemic venous congestion in heart failure.
Rubio Gracia, J; Sánchez Marteles, M; Pérez Calvo, J I
2017-04-01
Systemic venous congestion has gained significant importance in the interpretation of the pathophysiology of acute heart failure, especially in the development of renal function impairment during exacerbations. In this study, we review the concept, clinical characterisation and identification of venous congestion. We update current knowledge on its importance in the pathophysiology of acute heart failure and its involvement in the prognosis. We pay special attention to the relationship between abdominal congestion, the pulmonary interstitium as filtering membrane, inflammatory phenomena and renal function impairment in acute heart failure. Lastly, we review decongestion as a new therapeutic objective and the measures available for its assessment. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Medicina Interna (SEMI). All rights reserved.
Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.
2016-01-01
PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties and validate the PATH transition states by showing consistency between transition states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. PMID:26958584
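For concreteness, the Onsager-Machlup action minimized by such methods can be written, for an overdamped Langevin model with drift f and diffusion coefficient D, as S = Σ_i ||x_{i+1} − x_i − f(x_i)Δt||² / (4DΔt) over a discretized path. The sketch below evaluates that discretized functional for a trial path; the force field, step size, and diffusion constant are illustrative assumptions, not PATH's actual inputs.

    import numpy as np

    def onsager_machlup_action(path, force, dt, diffusion=1.0):
        """Discretized Onsager-Machlup action for an overdamped Langevin model:
        S = sum_i ||x_{i+1} - x_i - f(x_i) dt||^2 / (4 D dt).
        `path` has shape (n_steps + 1, n_dof); `force` maps a configuration
        to the drift f(x)."""
        x = np.asarray(path, float)
        drift = np.array([force(xi) for xi in x[:-1]])
        increments = x[1:] - x[:-1] - drift * dt
        return np.sum(increments ** 2) / (4.0 * diffusion * dt)

    # Toy example: a particle in a 1-D double well, f(x) = -dV/dx = x - x^3.
    trial_path = np.linspace(-1.0, 1.0, 51)[:, None]   # straight-line guess
    S = onsager_machlup_action(trial_path, lambda x: x - x**3, dt=0.1)
    print(f"action of the trial path: {S:.3f}")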
Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data
Kan, Zihan; Zhang, Xia
2018-01-01
The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles’ mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles’ activities in road networks. PMID:29561813
Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data.
Kan, Zihan; Tang, Luliang; Kwan, Mei-Po; Zhang, Xia
2018-03-21
The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles' mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles' activities in road networks.
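A minimal sketch of the activity-differentiated accounting described above: each space-time path segment is typed as mobile activity, stationary with engine on, or stationary with engine off, and fuel is accumulated per type. The rate constants are crude placeholders, not the calibrated consumption/emission factors used in the paper.

    from dataclasses import dataclass

    @dataclass
    class STPS:                 # space-time path segment
        activity: str           # "MA", "SA_on", or "SA_off"
        duration_s: float
        mean_speed_ms: float = 0.0

    def segment_fuel_l(seg: STPS) -> float:
        """Fuel use of one segment (litres). Rates are illustrative
        placeholders, not the paper's calibrated factors."""
        if seg.activity == "SA_off":
            return 0.0                       # engine off: no fuel burned
        if seg.activity == "SA_on":
            return 2.8e-4 * seg.duration_s   # idling rate, roughly 1 L/h
        # Mobile activity: simple speed-dependent rate (placeholder model).
        rate_l_per_s = 2.8e-4 + 6.0e-5 * seg.mean_speed_ms
        return rate_l_per_s * seg.duration_s

    trip = [STPS("MA", 300, 12.0), STPS("SA_on", 120), STPS("SA_off", 600),
            STPS("MA", 450, 8.0)]
    print(f"trip total: {sum(map(segment_fuel_l, trip)):.3f} L")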
Transient Heat Conduction Simulation around Microprocessor Die
NASA Astrophysics Data System (ADS)
Nishi, Koji
This paper explains the fundamental formula for calculating the power consumption of CMOS (Complementary Metal-Oxide-Semiconductor) devices and its voltage and temperature dependency, then introduces an equation for estimating the power consumption of a microprocessor for a notebook PC (Personal Computer). The equation is applied to a heat conduction simulation with a simplified thermal model and evaluated with sub-millisecond time steps. In addition, the microprocessor has two major heat conduction paths: one from the top of the silicon die via the thermal solution, and the other from the package substrate and pins via the PGA (Pin Grid Array) socket. Even though the former path is the dominant factor in heat conduction, the latter path, from the package substrate and pins, plays an important role in transient heat conduction behavior. Therefore, this paper focuses on the path from the package substrate and pins and investigates a more accurate method of estimating the heat conduction paths of the microprocessor. Also, the cooling-performance model of the heatsink fan is a key point in assuring results of practical accuracy, while a finer model requires more computational resources and hence longer computation time. The paper therefore discusses a model that minimizes the computational workload while keeping practical accuracy of the result.
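The two-path structure described above can be caricatured as a lumped thermal RC network integrated with explicit sub-millisecond steps. All resistances, capacitances, and power values below are invented for illustration; the point is the form of the computation, not the paper's calibrated model.

    import numpy as np

    # Minimal lumped RC sketch of the two heat conduction paths:
    # die -> thermal solution (top) and die -> package/pins/socket (bottom).
    # All R (K/W) and C (J/K) values are illustrative, not measured data.
    R_top, R_bot = 0.4, 8.0          # top path dominates at steady state
    C_die, C_top, C_bot = 0.3, 20.0, 5.0
    T_amb, P = 35.0, 25.0            # ambient (C) and die power (W)

    dt, t_end = 5e-4, 10.0           # sub-millisecond time step
    T = np.full(3, T_amb)            # [die, top node, bottom node]
    for _ in range(int(t_end / dt)): # explicit Euler integration
        q_top = (T[0] - T[1]) / R_top
        q_bot = (T[0] - T[2]) / R_bot
        T[0] += dt * (P - q_top - q_bot) / C_die
        T[1] += dt * (q_top - (T[1] - T_amb) / 0.2) / C_top   # heatsink fan
        T[2] += dt * (q_bot - (T[2] - T_amb) / 2.0) / C_bot   # socket/board
    print(f"die temperature after {t_end:.0f} s: {T[0]:.1f} C")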
NASA Astrophysics Data System (ADS)
Aarons, J.; Grossi, M. D.
1982-08-01
To develop and operate an adaptive system, propagation factors of the ionospheric medium must be given to the designer. The operation of the system must change as a function of multipath spread, Doppler spread, path losses, channel correlation functions, etc. In addition, NATO mid-latitude HF transmission and transauroral paths require varying system operation, which must fully utilize automatic path diversity across transauroral paths. Current research and literature are reviewed to estimate the extent of the available technical information. Additional investigations to allow designers to orient new systems on realistic models of these parameters are suggested.
Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick
2009-08-01
Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method, recently employed by the USEPA, uses a vertical radial plume mapping (VRPM) algorithm based on optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated emission rates of methane from the PIC and meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving average PICs were used; however, it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the significance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, the paper presents how the blurred star image can be corrected to reconstruct the clear scene with a thinned-motion-path blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind estimation with parameterized PSFs. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and thus the blur kernel, from the motion-blurred star image. The paper then details how the motion blur model is incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with conventional blur kernel estimation, experimental results show that the proposed method of using a thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency, and better accuracy, which contributes to better restoration of motion-blurred star images.
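The RL update itself is compact: each iteration convolves the current estimate with the PSF, compares against the observed blurred image, and back-projects the ratio with the flipped kernel. A minimal sketch, assuming the PSF (in the paper's setting, built from the thinned star trajectory) is given as a normalized array:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        """Plain Richardson-Lucy deconvolution with a known blur kernel."""
        blurred = np.maximum(blurred, 0.0)      # guard tiny negative FFT residue
        psf = psf / psf.sum()
        psf_flip = psf[::-1, ::-1]              # adjoint (flipped) kernel
        estimate = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            predicted = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(predicted, 1e-12)
            estimate *= fftconvolve(ratio, psf_flip, mode="same")
        return estimate

    # Example: synthetic star field blurred by a short horizontal motion path.
    rng = np.random.default_rng(0)
    scene = np.zeros((64, 64))
    scene[rng.integers(0, 64, 10), rng.integers(0, 64, 10)] = 1.0
    kernel = np.zeros((7, 7))
    kernel[3, :] = 1.0                          # linear motion blur kernel
    blurred = fftconvolve(scene, kernel / kernel.sum(), mode="same")
    restored = richardson_lucy(blurred, kernel)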
Pellicori, Pierpaolo; Cleland, John G F; Zhang, Jufen; Kallvikbacka-Bennett, Anna; Urbinati, Alessia; Shah, Parin; Kazmi, Syed; Clark, Andrew L
2016-12-01
Diuretics are the mainstay of treatment for congestion but concerns exist that they adversely affect prognosis. We explored whether the relationship between loop diuretic use and outcome is explained by the underlying severity of congestion amongst patients referred with suspected heart failure. Of 1190 patients, 712 had a left ventricular ejection fraction (LVEF) ≤50 %, 267 had LVEF >50 % with raised plasma NTproBNP (>400 ng/L) and 211 had LVEF >50 % with NTproBNP ≤400 ng/L; respectively, 72 %, 68 % and 37 % of these groups were treated with loop diuretics including 28 %, 29 % and 10 % in doses ≥80 mg furosemide equivalent/day. Compared to patients with cardiac dysfunction (either LVEF ≤50 % or NT-proBNP >400 ng/L) but not taking a loop diuretic, those taking a loop diuretic were older and had more clinical evidence of congestion, renal dysfunction, anaemia and hyponatraemia. During a median follow-up of 934 (IQR: 513-1425) days, 450 patients were hospitalized for HF or died. Patients prescribed loop diuretics had a worse outcome. However, in multi-variable models, clinical, echocardiographic (inferior vena cava diameter), and biochemical (NTproBNP) measures of congestion were strongly associated with an adverse outcome but not the use, or dose, of loop diuretics. Prescription of loop diuretics identifies patients with more advanced features of heart failure and congestion, which may account for their worse prognosis. Further research is needed to clarify the relationship between loop diuretic agents and outcome; imaging and biochemical measures of congestion might be better guides to diuretic dose than symptoms or clinical signs.
Verbrugge, Frederik H; Dupont, Matthias; Steels, Paul; Grieten, Lars; Swennen, Quirine; Tang, W H Wilson; Mullens, Wilfried
2014-02-01
This review discusses renal sodium handling in heart failure. Increased sodium avidity and tendency to extracellular volume overload, i.e. congestion, are hallmark features of the heart failure syndrome. Particularly in the case of concomitant renal dysfunction, the kidneys often fail to elicit potent natriuresis. Yet, assessment of renal function is generally performed by measuring serum creatinine, which has inherent limitations as a biomarker for the glomerular filtration rate (GFR). Moreover, glomerular filtration only represents part of the nephron's function. Alterations in the fractional reabsorptive rate of sodium are at least equally important in emerging therapy-refractory congestion. Indeed, renal blood flow decreases before the GFR is affected in congestive heart failure. The resulting increased filtration fraction changes Starling forces in peritubular capillaries, which drive sodium reabsorption in the proximal tubules. Congestion further stimulates this process by augmenting renal lymph flow. Consequently, fractional sodium reabsorption in the proximal tubules is significantly increased, limiting sodium delivery to the distal nephron. Orthosympathetic activation probably plays a pivotal role in those deranged intrarenal haemodynamics, which ultimately enhance diuretic resistance, stimulate neurohumoral activation with aldosterone breakthrough, and compromise the counter-regulatory function of natriuretic peptides. Recent evidence even suggests that intrinsic renal derangements might impair natriuresis early on, before clinical congestion or neurohumoral activation are evident. This represents a paradigm shift in heart failure pathophysiology, as it suggests that renal dysfunction-although not by conventional GFR measurements-is driving disease progression. In this respect, a better understanding of renal sodium handling in congestive heart failure is crucial to achieve more tailored decongestive therapy, while preserving renal function. © 2013 The Authors. European Journal of Heart Failure © 2013 European Society of Cardiology.
Baert, Anneleen; De Smedt, Delphine; De Sutter, Johan; De Bacquer, Dirk; Puddu, Paolo Emilio; Clays, Els; Pardaens, Sofie
2018-03-01
Background Since improved treatment of congestive heart failure has resulted in decreased mortality and hospitalisation rates, increasing self-perceived health-related quality of life (HRQoL) has become a major goal of congestive heart failure treatment. However, an overview of predictive factors of HRQoL is currently lacking in the literature. Purpose The aim of this study was to identify key factors associated with HRQoL in stable ambulatory patients with congestive heart failure. Methods A systematic review was performed. MEDLINE, Web of Science and Embase were searched for the following combination of terms: heart failure, quality of life, health perception or functional status, for the period between 2000 and February 2017. Literature screening was done by two independent reviewers. Results Thirty-five studies out of 8374 titles were included for quality appraisal, of which 29 were selected for further data extraction. Four distinct categories grouping different types of variables were identified: socio-demographic characteristics, clinical characteristics, health and health behaviour, and care provider characteristics. Within these categories, the presence of depressive symptoms was most consistently related to a worse HRQoL, followed by a higher New York Heart Association functional class, younger age and female gender. Conclusion Through a systematic literature search, factors associated with HRQoL among congestive heart failure patients were investigated. Age, gender, New York Heart Association functional class and depressive symptoms are the most consistent variables explaining the variance in HRQoL in patients with congestive heart failure. These findings are partly in line with previous research on predictors of hard endpoints in patients with congestive heart failure.
ACTS Ka-band Propagation Research in a Spatially Diversified Network with Two USAT Ground Stations
NASA Technical Reports Server (NTRS)
Kalu, Alex; Acousta, R.; Durand, S.; Emrich, Carol; Ventre, G.; Wilson, W.
1999-01-01
Congestion in the radio spectrum below 18 GHz is stimulating greater interest in the Ka (20/30 GHz) frequency band. Transmission at these shorter wavelengths is greatly influenced by rain, resulting in signal attenuation and decreased link availability. The size and projected cost of Ultra Small Aperture Terminals (USATs) make site diversity methodology attractive for rain fade compensation. Separation distances between terminals must be small to be of interest commercially. This study measures diversity gain at a separation distance <5 km and investigates utilization of S-band weather radar reflectivity in predicting diversity gain. Two USAT ground stations, separated by 2.43 km for spatial diversity, received a continuous Ka-band tone sent from NASA Glenn Research Center via the Advanced Communications Technology Satellite (ACTS) steerable antenna beam. Received signal power and rainfall were measured, and Weather Surveillance Radar-1988 Doppler (WSR-88D) data were obtained as a measure of precipitation along the USAT-to-ACTS slant path. Signal attenuation was compared for the two sites, and diversity gain was calculated for fades measured on eleven days. Correlation of WSR-88D S-band reflectivity with measured Ka-band attenuation consisted of locating radar volume elements along each slant path and converting reflectivity to Ka-band attenuation, with rain rate calculation as an intermediate step. Specific attenuation for each associated path segment was summed, resulting in total attenuation along the slant path. Derived Ka-band attenuation did not correlate closely with empirical data (r = 0.239), but a measured signal fade could be matched with an increase in radar reflectivity in all fade events. Applying a low-pass filter to radar reflectivity prior to deriving Ka-band attenuation improved the correlation between measured and derived signal attenuation (r = 0.733). Results indicate that site diversity at small separation distances is a viable means of rain fade compensation, and that existing models underestimate diversity gain for a subtropical climate such as Florida. Also, filtered WSR-88D reflectivity can be used for optimizing diversity terminal placement by comparing derived Ka-band attenuation between the diversity sites.
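The reflectivity-to-attenuation chain described above (reflectivity to rain rate to specific attenuation, summed over path segments) can be sketched as follows. The Marshall-Palmer relation Z = 200 R^1.6 is a standard choice; the power-law attenuation coefficients are rough placeholders of approximately the right magnitude for 20 GHz, not the values used in the study.

    import numpy as np

    def radar_to_path_attenuation(dbz, seg_km, a=0.075, b=1.10):
        """Convert S-band reflectivity along a slant path to total Ka-band
        rain attenuation (dB): dBZ -> Z -> rain rate R via Marshall-Palmer
        (Z = 200 R^1.6) -> specific attenuation k = a R^b (dB/km) ->
        sum of k times segment length. Coefficients a, b are placeholders."""
        z = 10.0 ** (np.asarray(dbz, float) / 10.0)   # mm^6 / m^3
        rain = (z / 200.0) ** (1.0 / 1.6)             # mm / h
        k = a * rain ** b                             # dB / km
        return np.sum(k * np.asarray(seg_km, float))

    # Radar volume elements located along the USAT-to-satellite slant path.
    dbz_profile = [38, 35, 30, 22]        # reflectivity per volume element
    lengths_km  = [0.8, 0.8, 0.8, 0.8]    # path length inside each element
    total = radar_to_path_attenuation(dbz_profile, lengths_km)
    print(f"derived attenuation: {total:.2f} dB")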
Randomized shortest-path problems: two related models.
Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh
2009-08-01
This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost (nothing particular up to now: you could use a standard shortest-path algorithm). Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy, minimizing the expected routing cost, are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
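A compact way to state the sum-over-paths machinery: weight each edge by the reference transition probability times a Boltzmann factor in the cost, solve one linear system for the "partition function" of paths from every node to the target, and read a randomized policy off the result. The sketch below follows that recipe; the policy expression p*_ij = w_ij z_j / z_i is recalled from the randomized shortest-path literature, so treat it as a paraphrase rather than the letter's exact derivation.

    import numpy as np

    def rsp_policy(C, P_ref, theta, target):
        """Randomized shortest-path sketch: blend least-cost routing with
        exploration. W couples the reference random walk with
        Boltzmann-weighted costs; z gives each node's partition function
        over paths to the target."""
        n = C.shape[0]
        W = P_ref * np.exp(-theta * C)
        W[target, :] = 0.0                         # target is absorbing
        z = np.linalg.solve(np.eye(n) - W, np.eye(n)[:, target])
        policy = W * z[None, :] / z[:, None]       # p*_ij = w_ij z_j / z_i
        policy[target, target] = 1.0
        return policy

    # 4-node line graph 0-1-2-3 plus a costly direct shortcut 0-3.
    inf = np.inf
    C = np.array([[inf, 1, inf, 5], [1, inf, 1, inf],
                  [inf, 1, inf, 1], [5, inf, 1, inf]], float)
    A = np.isfinite(C).astype(float)
    P_ref = A / A.sum(1, keepdims=True)            # unbiased random walk
    C = np.where(np.isfinite(C), C, 0.0)           # cost on existing edges only
    print(rsp_policy(C, P_ref, theta=2.0, target=3).round(3))

With large theta the policy concentrates on the least-cost path; with theta near zero it approaches the reference random walk.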
Map Projection Induced Variations in Locations of Polygon Geofence Edges
NASA Technical Reports Server (NTRS)
Neeley, Paula; Narkawicz, Anthony
2017-01-01
This paper provides lower-bound estimates of answers to the following question under various constraints: If a geofencing algorithm uses a map projection to determine whether a position is inside/outside a polygon region, how far outside/inside the polygon can the point be and the algorithm determine that it is inside/outside (the opposite and therefore incorrect answer)? Geofencing systems for unmanned aircraft systems (UAS) often model stay-in and stay-out regions using 2D polygons with minimum and maximum altitudes. The vertices of the polygons are typically input as latitude-longitude pairs, and the edges as paths between adjacent vertices. There are numerous ways to generate these paths, resulting in numerous potential locations for the edges of stay-in and stay-out regions. These paths may be geodesics on a spherical model of the earth or geodesics on the WGS84 reference ellipsoid. In geofencing applications that use map projections, these paths are inverse images of straight lines in the projected plane. This projected plane may be a projection of a spherical earth model onto a tangent plane, called an orthographic projection. Alternatively, it may be a projection where the straight lines in the projected plane correspond to straight lines in the latitude-longitude coordinate system, also called a Plate Carrée projection. This paper estimates distances between different edge paths and an oracle path, which is a geodesic on either the spherical earth or the WGS84 ellipsoidal earth. This paper therefore estimates how far apart different edge paths can be rather than comparing their path lengths, which are not considered. Rather, the comparison is between the actual locations of the edges between vertices. For edges drawn using orthographic projections, this maximum distance increases as the distance from the polygon vertices to the projection point increases. For edges drawn using Plate Carrée projections, this maximum distance increases as the vertices become further from the equator. Distances between geodesics on a spherical earth and a WGS84 ellipsoidal earth are also analyzed, using the WGS84 ellipsoid as the oracle. Bounds on the 2D distance between a straight line and a great circle path, in an orthographically projected plane rather than on the surface of the earth, have been formally verified in the PVS theorem prover, meaning that they are mathematically correct in the absence of floating point errors.
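To see the size of the effect, one can compare the midpoint of an edge drawn as a straight line in latitude-longitude coordinates (Plate Carrée) with the great-circle midpoint of the same edge on a spherical earth. This is only a rough proxy for the paper's formal bounds, and the spherical radius is an assumption.

    import numpy as np

    def unit_vector(lat_deg, lon_deg):
        """Latitude/longitude (degrees) to a unit vector on a spherical earth."""
        lat, lon = np.radians([lat_deg, lon_deg])
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

    def midpoint_gap_km(p1, p2, radius_km=6371.0):
        """Distance between the great-circle midpoint of edge p1-p2 and the
        midpoint of the same edge drawn as a straight line in
        latitude-longitude coordinates (Plate Carree)."""
        v1, v2 = unit_vector(*p1), unit_vector(*p2)
        gc_mid = (v1 + v2) / np.linalg.norm(v1 + v2)     # geodesic midpoint
        pc_mid = unit_vector((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
        return radius_km * np.arccos(np.clip(gc_mid @ pc_mid, -1.0, 1.0))

    # An east-west edge at high latitude, where Plate Carree distortion is large.
    print(f"{midpoint_gap_km((60.0, -10.0), (60.0, 10.0)):.1f} km")

For an east-west edge spanning 20 degrees of longitude at 60 degrees latitude, the two midpoints land roughly 40 km apart, which is why the choice of edge-drawing convention matters.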
Meneguette, Rodolfo I; Filho, Geraldo P R; Guidoni, Daniel L; Pessin, Gustavo; Villas, Leandro A; Ueyama, Jó
2016-01-01
Intelligent Transportation Systems (ITS) rely on Inter-Vehicle Communication (IVC) to streamline the operation of vehicles by managing vehicle traffic, assisting drivers with safety and sharing information, as well as providing appropriate services for passengers. Traffic congestion is an urban mobility problem, which causes stress to drivers and economic losses. In this context, this work proposes a solution for the detection, dissemination and control of congested roads based on inter-vehicle communication, called INCIDEnT. The main goal of the proposed solution is to reduce the average trip time, CO emissions and fuel consumption by allowing motorists to avoid congested roads. The simulation results show that our proposed solution leads to short delays and a low overhead. Moreover, it is efficient with regard to the coverage of the event and the distance to which the information can be propagated. The findings of the investigation show that the proposed solution leads to (i) high hit rate in the classification of the level of congestion, (ii) a reduction in average trip time, (iii) a reduction in fuel consumption, and (iv) reduced CO emissions.
Filho, Geraldo P. R.; Guidoni, Daniel L.; Pessin, Gustavo; Villas, Leandro A.; Ueyama, Jó
2016-01-01
Intelligent Transportation Systems (ITS) rely on Inter-Vehicle Communication (IVC) to streamline the operation of vehicles by managing vehicle traffic, assisting drivers with safety and sharing information, as well as providing appropriate services for passengers. Traffic congestion is an urban mobility problem, which causes stress to drivers and economic losses. In this context, this work proposes a solution for the detection, dissemination and control of congested roads based on inter-vehicle communication, called INCIDEnT. The main goal of the proposed solution is to reduce the average trip time, CO emissions and fuel consumption by allowing motorists to avoid congested roads. The simulation results show that our proposed solution leads to short delays and a low overhead. Moreover, it is efficient with regard to the coverage of the event and the distance to which the information can be propagated. The findings of the investigation show that the proposed solution leads to (i) high hit rate in the classification of the level of congestion, (ii) a reduction in average trip time, (iii) a reduction in fuel consumption, and (iv) reduced CO emissions. PMID:27526048
Packet loss mitigation for biomedical signals in healthcare telemetry.
Garudadri, Harinath; Baheti, Pawan K
2009-01-01
In this work, we propose an effective application layer solution for packet loss mitigation in the context of Body Sensor Networks (BSN) and healthcare telemetry. Packet losses occur for many reasons, including excessive path loss, interference from other wireless systems, handoffs, congestion, and system loading. A call for action is in order, as packet losses can have an extremely adverse impact on many healthcare applications relying on BAN and WAN technologies. Our approach for packet loss mitigation is based on Compressed Sensing (CS), an emerging signal processing concept, wherein significantly fewer sensor measurements than those suggested by the Shannon/Nyquist sampling theorem can be used to recover signals with arbitrarily fine resolution. We present simulation results demonstrating graceful degradation of performance with increasing packet loss rate. We also compare the proposed approach with retransmissions. The CS-based packet loss mitigation approach was found to maintain up to 99% beat-detection accuracy at packet loss rates of 20%, with a constant latency of less than 2.5 seconds.
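In CS-based loss mitigation, lost packets amount to missing rows of the measurement matrix, and the receiver reconstructs the sparse signal from whatever measurements survive. The sketch below uses a generic iterative shrinkage-thresholding (ISTA) decoder on random Gaussian measurements; the matrix, sparsity level, and decoder are stand-ins, not the specific CS formulation of the paper.

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=300):
        """Iterative shrinkage-thresholding for
        min_x 0.5 ||A x - y||^2 + lam ||x||_1 (a generic sparse decoder)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - y)
            x = x - g / L
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(1)
    n, m = 256, 128
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
    y = A @ x_true
    keep = rng.random(m) > 0.2                 # simulate 20% packet loss
    x_hat = ista(A[keep], y[keep])             # decode surviving measurements
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {err:.3f}")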
I/O Router Placement and Fine-Grained Routing on Titan to Support Spider II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Dillow, David; Oral, H Sarp
2014-01-01
The Oak Ridge Leadership Computing Facility (OLCF) introduced the concept of Fine-Grained Routing in 2008 to improve I/O performance between the Jaguar supercomputer and Spider, OLCF's center-wide Lustre file system. Fine-grained routing organizes I/O paths to minimize congestion. Jaguar has since been upgraded to Titan, providing more than a ten-fold improvement in peak performance. To support the center's increased computational capacity and I/O demand, the Spider file system has been replaced with Spider II. Building on the lessons learned from Spider, an improved method for placing LNET routers was developed and implemented for Spider II. The fine-grained routing scripts and configuration have been updated to provide additional optimizations and better match the system setup. This paper presents a brief history of fine-grained routing at OLCF, an introduction to the architectures of Titan and Spider II, methods for placing routers in Titan, and details about the fine-grained routing configuration.
Future trends in commercial and military systems
NASA Astrophysics Data System (ADS)
Bond, F. E.
Commercial and military satellite communication systems are addressed, with a review of current applications and typical communication characteristics of the space and earth segments. Drivers for the development of future commercial systems include: the pervasion of digital techniques and services, growing orbit and frequency congestion, demand for more entertainment, and the large potential market for commercial 'roof-top' service. For military systems, survivability, improved flexibility, and the need for service to small mobile terminals are the principal factors involved. Technical trends include the use of higher frequency bands, multibeam antennas and a significant increase in the application of onboard processing. Military systems will employ a variety of techniques to counter both physical and electronic threats. The use of redundant transmission paths is a particularly effective approach. Successful implementation requires transmission standards to achieve the required interoperability among the pertinent networks. For both the military and commercial sectors, the trend toward larger numbers of terminals and more complex spacecraft is still persisting.
A Dynamic Approach to Rebalancing Bike-Sharing Systems
2018-01-01
Bike-sharing services are flourishing in Smart Cities worldwide. They provide a low-cost and environment-friendly transportation alternative and help reduce traffic congestion. However, these new services are still under development, and several challenges need to be solved. A major problem is the management of rebalancing trucks in order to ensure that bikes and stalls in the docking stations are always available when needed, despite the fluctuations in the service demand. In this work, we propose a dynamic rebalancing strategy that exploits historical data to predict the network conditions and promptly act in case of necessity. We use Birth-Death Processes to model the stations’ occupancy and decide when to redistribute bikes, and graph theory to select the rebalancing path and the stations involved. We validate the proposed framework on the data provided by New York City’s bike-sharing system. The numerical simulations show that a dynamic strategy able to adapt to the fluctuating nature of the network outperforms rebalancing schemes based on a static schedule. PMID:29419771
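A minimal sketch of the birth-death occupancy model referenced above: with bikes returned at rate lambda and rented at rate mu, the number of docked bikes behaves like an M/M/1/K queue, whose stationary distribution gives the probability that a station is empty (no bikes) or full (no free stalls). The rates and capacity below are invented for illustration.

    import numpy as np

    def shortage_probabilities(lam, mu, capacity):
        """Steady state of a birth-death model of one docking station:
        bikes arrive at rate `lam`, are rented at rate `mu`, and the state
        is the number of docked bikes (0..capacity). Returns
        (P(station empty), P(station full))."""
        rho = lam / mu
        pi = rho ** np.arange(capacity + 1)    # unnormalized M/M/1/K weights
        pi /= pi.sum()
        return pi[0], pi[-1]

    p_empty, p_full = shortage_probabilities(lam=10.0, mu=8.0, capacity=20)
    print(f"P(no bikes) = {p_empty:.3f}, P(no free stalls) = {p_full:.3f}")

A rebalancing trigger can then be as simple as dispatching a truck to any station whose predicted empty-or-full probability crosses a threshold.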
The Viareggio LPG railway accident: event reconstruction and modeling.
Brambilla, Sara; Manca, Davide
2010-10-15
This manuscript describes in detail the LPG accident that occurred in Viareggio in June 2009 and its modeling. The accident investigation highlighted the uncertainty and complexity of assessing and modeling what happened in the congested environment close to the Viareggio railway station. Nonetheless, the analysis made it possible to comprehend the sequence of events, the ways they influenced each other, and the different possible paths/evolutions. The paper describes suitable models for the quantitative assessment of the consequences of the most probable accidental dynamics and its outcomes. The main finding is that about 80 s after the beginning of the release the dense-gas cloud reached the surrounding houses, which were subsequently destroyed by internal explosions. This fact has two main implications. First, it shows that the adopted modeling framework can give a correct picture of what happened in Viareggio. Second, it confirms the need to develop effective mitigation measures because, in the case of this kind of accident, there is no time to apply any protective emergency plans/actions. © 2010 Elsevier B.V. All rights reserved.
Link prediction based on local weighted paths for complex networks
NASA Astrophysics Data System (ADS)
Yao, Yabing; Zhang, Ruisheng; Yang, Fan; Yuan, Yongna; Hu, Rongjing; Zhao, Zhili
As a significant problem in complex networks, link prediction aims to find the missing and future links between two unconnected nodes by estimating the existence likelihood of potential links. It plays an important role in understanding the evolution mechanism of networks and has broad applications in practice. In order to improve prediction performance, a variety of structural similarity-based methods that rely on different topological features have been put forward. As one topological feature, the path information between node pairs is utilized to calculate the node similarity. However, many path-dependent methods neglect the different contributions of paths for a pair of nodes. In this paper, a local weighted path (LWP) index is proposed to differentiate the contributions between paths. The LWP index considers the effect of the link degrees of intermediate links and the connectivity influence of intermediate nodes on paths to quantify the path weight in the prediction procedure. The experimental results on 12 real-world networks show that the LWP index outperforms other seven prediction baselines.
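The path-counting backbone of such indices is easy to state: powers of the adjacency matrix count 2- and 3-step paths between node pairs, and a small damping factor discounts the longer ones. The sketch below implements that plain local path index; the paper's LWP index additionally weights each path by the degrees of its intermediate links and the connectivity of its intermediate nodes, which is omitted here.

    import numpy as np

    def local_path_scores(A, eps=0.01):
        """Plain local path similarity, S = A^2 + eps * A^3, counting 2- and
        3-step walks between node pairs. A weighted variant would rescale
        each path's contribution, as the LWP index does."""
        A2 = A @ A
        return A2 + eps * (A2 @ A)

    # Tiny toy graph: a ring of 5 nodes plus one chord.
    A = np.zeros((5, 5))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]:
        A[i, j] = A[j, i] = 1.0
    S = local_path_scores(A)
    np.fill_diagonal(S, 0)
    S[A > 0] = 0                     # score only currently unconnected pairs
    print(np.unravel_index(S.argmax(), S.shape))   # top candidate missing link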
Insomnia Self-Management in Heart Failure
2018-01-05
Cardiac Failure; Heart Failure; Congestive Heart Failure; Heart Failure, Congestive; Sleep Initiation and Maintenance Disorders; Chronic Insomnia; Disorders of Initiating and Maintaining Sleep; Fatigue; Pain; Depressive Symptoms; Sleep Disorders; Anxiety
On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games
NASA Astrophysics Data System (ADS)
de Keijzer, Bart; Schäfer, Guido; Telelis, Orestis A.
We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines with or without restricted assignments.
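The model is concrete enough to evaluate directly: facility latencies are linear in load, a player's cost is the maximum latency among his chosen facilities, and the social cost is the maximum latency of any facility. A small sketch with invented coefficients, weights, and singleton strategies (the class allows arbitrary feasible subsets):

    # Linear bottleneck congestion game: latency of facility f is a_f times
    # its total load; each player's cost is his bottleneck (maximum) latency.
    a = {"f1": 1.0, "f2": 1.5, "f3": 2.0}          # latency coefficients
    weights = {"p1": 2.0, "p2": 1.0, "p3": 1.0}    # player weights
    strategies = [("f1",), ("f2",), ("f3",)]       # feasible subsets

    def loads(profile):
        load = dict.fromkeys(a, 0.0)
        for player, subset in profile.items():
            for f in subset:
                load[f] += weights[player]
        return load

    def player_cost(profile, player):
        load = loads(profile)
        return max(a[f] * load[f] for f in profile[player])

    def social_cost(profile):                      # load-balancing objective
        load = loads(profile)
        return max(a[f] * load[f] for f in a)

    profile = {"p1": ("f1",), "p2": ("f2",), "p3": ("f3",)}
    print("social cost:", social_cost(profile))
    # Nash check: no player lowers his bottleneck cost by deviating alone.
    for p in weights:
        best = min(player_cost({**profile, p: s}, p) for s in strategies)
        assert best >= player_cost(profile, p) - 1e-9, f"{p} wants to deviate"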
Continuum modeling of cooperative traffic flow dynamics
NASA Astrophysics Data System (ADS)
Ngoduy, D.; Hoogendoorn, S. P.; Liu, R.
2009-07-01
This paper presents a continuum approach to model the dynamics of cooperative traffic flow. The cooperation is defined in our model in such a way that an equipped vehicle can issue and receive a warning message when there is downstream congestion. Upon receiving the warning message, the (upstream) equipped vehicle will adapt its current desired speed to the speed at the congested area in order to avoid sharp deceleration when approaching the congestion. To model the dynamics of such cooperative systems, a multi-class gas-kinetic theory is extended to capture the adaptation of the desired speed of the equipped vehicle to the speed of the downstream congested traffic. Numerical simulations are carried out to show the influence of the penetration rate of the equipped vehicles on traffic flow stability and capacity in a freeway.
2012-01-01
Background Characterizing factors which determine susceptibility to air pollution is an important step in understanding the distribution of risk in a population and is critical for setting appropriate policies. We evaluate general and specific measures of community health as modifiers of risk for asthma and congestive heart failure following an episode of acute exposure to wildfire smoke. Methods A population-based study of emergency department visits and daily concentrations of fine particulate matter during a wildfire in North Carolina was performed. Determinants of community health defined by County Health Rankings were evaluated as modifiers of the relative risk. A total of 40 mostly rural counties were included in the study. These rankings measure factors influencing health: health behaviors, access and quality of clinical care, social and economic factors, and physical environment, as well as the outcomes of health: premature mortality and morbidity. Pollutant concentrations were obtained from a mathematically modeled smoke forecasting system. Estimates of relative risk for emergency department visits were based on Poisson mixed effects regression models applied to daily visit counts. Results For asthma, the strongest association was observed at lag day 0, with an excess relative risk of 66% (95% CI: 28%, 117%). For congestive heart failure the excess relative risk was 42% (95% CI: 5%, 93%). The largest difference in risk was observed after stratifying on the basis of Socio-Economic Factors. The difference in risk between bottom and top ranked counties by Socio-Economic Factors was 85% and 124% for asthma and congestive heart failure, respectively. Conclusions The results indicate that Socio-Economic Factors should be considered as modifying risk factors in air pollution studies and be evaluated in the assessment of air pollution impacts. PMID:23006928
Population Pharmacokinetics of Fentanyl in the Critically Ill
Choi, Leena; Ferrell, Benjamin A; Vasilevskis, Eduard E; Pandharipande, Pratik P; Heltsley, Rebecca; Ely, E Wesley; Stein, C Michael; Girard, Timothy D
2016-01-01
Objective To characterize fentanyl population pharmacokinetics in patients with critical illness and identify patient characteristics associated with altered fentanyl concentrations. Design Prospective cohort study. Setting Medical and surgical ICUs in a large tertiary care hospital in the United States. Patients Patients with acute respiratory failure and/or shock who received fentanyl during the first five days of their ICU stay. Measurements and Main Results We collected clinical and hourly drug administration data and measured fentanyl concentrations in plasma collected once daily for up to five days after enrollment. Among 337 patients, the mean duration of infusion was 58 hours at a median rate of 100 µg/hr. Using a nonlinear mixed-effects model implemented by NONMEM, we found fentanyl pharmacokinetics were best described by a two-compartment model in which weight, severe liver disease, and congestive heart failure most affected fentanyl concentrations. For a patient population with a mean weight of 92 kg and no history of severe liver disease or congestive heart failure, the final model, which performed well in repeated 10-fold cross-validation, estimated total clearance (CL), intercompartmental clearance (Q), and volumes of distribution for the central (V1) and peripheral compartments (V2) to be 35 (95% confidence interval: 32 to 39) L/hr, 55 (42 to 68) L/hr, 203 (140 to 266) L, and 523 (428 to 618) L, respectively. Severity of illness was marginally associated with fentanyl pharmacokinetics but did not improve the model fit after liver and heart disease were included. Conclusions In this study, fentanyl pharmacokinetics during critical illness were strongly influenced by severe liver disease, congestive heart failure, and weight, factors that should be considered when dosing fentanyl in the ICU. Future studies are needed to determine if data-driven fentanyl dosing algorithms can improve outcomes for ICU patients. PMID:26491862
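For illustration, the reported structural model and population estimates can be plugged into a two-compartment infusion simulation. The dosing scenario below (100 µg/h for 58 h, the cohort's median rate and mean duration) and the solver settings are assumptions; the sketch reproduces the model form, not the NONMEM analysis.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-compartment infusion model with the population estimates reported
    # above (CL = 35 L/h, Q = 55 L/h, V1 = 203 L, V2 = 523 L).
    CL, Q, V1, V2 = 35.0, 55.0, 203.0, 523.0

    def rates(t, a, infusion_ug_per_h):
        a1, a2 = a                              # drug amounts (ug)
        r_in = infusion_ug_per_h if t <= 58.0 else 0.0
        da1 = r_in - (CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2
        da2 = (Q / V1) * a1 - (Q / V2) * a2
        return [da1, da2]

    t_eval = np.linspace(0.0, 96.0, 97)
    sol = solve_ivp(rates, (0.0, 96.0), [0.0, 0.0], args=(100.0,),
                    t_eval=t_eval, max_step=0.5)
    conc_ng_per_ml = sol.y[0] / V1              # ug/L equals ng/mL numerically
    print(f"plasma concentration at 58 h: {conc_ng_per_ml[58]:.2f} ng/mL")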
Rappold, Ana G; Cascio, Wayne E; Kilaru, Vasu J; Stone, Susan L; Neas, Lucas M; Devlin, Robert B; Diaz-Sanchez, David
2012-09-24
Characterizing factors which determine susceptibility to air pollution is an important step in understanding the distribution of risk in a population and is critical for setting appropriate policies. We evaluate general and specific measures of community health as modifiers of risk for asthma and congestive heart failure following an episode of acute exposure to wildfire smoke. A population-based study of emergency department visits and daily concentrations of fine particulate matter during a wildfire in North Carolina was performed. Determinants of community health defined by County Health Rankings were evaluated as modifiers of the relative risk. A total of 40 mostly rural counties were included in the study. These rankings measure factors influencing health: health behaviors, access and quality of clinical care, social and economic factors, and physical environment, as well as the outcomes of health: premature mortality and morbidity. Pollutant concentrations were obtained from a mathematically modeled smoke forecasting system. Estimates of relative risk for emergency department visits were based on Poisson mixed effects regression models applied to daily visit counts. For asthma, the strongest association was observed at lag day 0, with an excess relative risk of 66% (95% CI: 28%, 117%). For congestive heart failure the excess relative risk was 42% (95% CI: 5%, 93%). The largest difference in risk was observed after stratifying on the basis of Socio-Economic Factors. The difference in risk between bottom and top ranked counties by Socio-Economic Factors was 85% and 124% for asthma and congestive heart failure, respectively. The results indicate that Socio-Economic Factors should be considered as modifying risk factors in air pollution studies and be evaluated in the assessment of air pollution impacts.
Doughty, R N; Rodgers, A; Sharpe, N; MacMahon, S
1997-04-01
Several randomized trials have reported that beta-blocker therapy improves left ventricular function and reduces the rate of hospitalization in patients with congestive heart failure. However, most trials were individually too small to assess reliably the effects of treatment on mortality. In these circumstances a systematic overview of all trials of beta-blocker therapy in patients with congestive heart failure may provide the most reliable guide to treatment effects. Details were sought from all completed randomized trials of oral beta-blocker therapy in patients with heart failure of any aetiology. In particular, data on mortality were sought from all randomized patients for the scheduled treatment period. The typical effect of treatment on mortality was estimated from an overview in which the results of all individual trials were combined using standard statistical methods. Twenty-four randomized trials, involving 3141 patients with stable congestive heart failure were identified. Complete data on mortality were obtained from all studies, and a total of 297 deaths were documented during an average of 13 months of follow-up. Overall, there was a 31% reduction in the odds of death among patients assigned a beta-blocker (95% confidence interval 11 to 46%, 2P = 0.0035), representing an absolute reduction in mean annual mortality from 9.7% to 7.5%. The effects on mortality of vasodilating beta-blockers (47% reduction SD 15), principally carvedilol, were non-significantly greater (2P = 0.09) than those of standard agents (18% reduction SD 15), principally metoprolol. Beta-blocker therapy is likely to reduce mortality in patients with heart failure. However, large-scale, long-term randomized trials are still required to confirm and quantify more precisely the benefit suggested by this overview.
Hemodynamic and neurochemical determinants of renal function in chronic heart failure.
Gilbert, Cameron; Cherney, David Z I; Parker, Andrea B; Mak, Susanna; Floras, John S; Al-Hesayen, Abdul; Parker, John D
2016-01-15
Abnormal renal function is common in acute and chronic congestive heart failure (CHF) and is related to the severity of congestion. However, treatment of congestion often leads to worsening renal function. Our objective was to explore basal determinants of renal function and their response to hemodynamic interventions. Thirty-seven patients without CHF and 59 patients with chronic CHF (ejection fraction: 23 ± 8%) underwent right heart catheterization, measurements of glomerular filtration rate (GFR; inulin) and renal plasma flow (RPF; para-aminohippurate), and radiotracer estimates of renal sympathetic activity. A subset (26 without, 36 with CHF) underwent acute pharmacological intervention with dobutamine or nitroprusside. We explored the relationship between baseline and drug-induced hemodynamic changes and changes in renal function. In CHF, there was an inverse relationship among right atrial mean (RAM) pressure, RPF, and GFR. By contrast, mean arterial pressure (MAP), cardiac index (CI), and measures of renal sympathetic activity were not significant predictors. In those with CHF there was also an inverse relationship between the drug-induced changes in RAM pressure, as well as in pulmonary artery mean pressure, and the change in GFR. Changes in MAP and CI did not predict the change in GFR in those with CHF. Baseline values and changes in RAM pressure did not correlate with GFR in those without CHF. In the CHF group there was a positive correlation between RAM pressure and renal sympathetic activity. There was also an inverse relationship among RAM pressure, GFR, and RPF in patients with chronic CHF. The observation that acute reductions in RAM pressure are associated with an increase in GFR in patients with CHF has important clinical implications. Copyright © 2016 the American Physiological Society.
Characterizing Longitude-Dependent Orbital Debris Congestion in the Geosynchronous Orbit Regime
NASA Astrophysics Data System (ADS)
Anderson, Paul V.
The geosynchronous orbit (GEO) is a unique commodity of the satellite industry that is becoming increasingly contaminated with orbital debris, but is heavily populated with high-value assets from the civil, commercial, and defense sectors. The GEO arena is home to hundreds of communications, data transmission, and intelligence satellites collectively insured for an estimated 18.3 billion USD. As the lack of natural cleansing mechanisms at the GEO altitude renders the lifetimes of GEO debris essentially infinite, conjunction and risk assessment must be performed to safeguard operational assets from debris collisions. In this thesis, longitude-dependent debris congestion is characterized by predicting the number of near-miss events per day for every longitude slot at GEO, using custom debris propagation tools and a torus intersection metric. Near-miss events with the present-day debris population are assigned risk levels based on GEO-relative position and speed, and this risk information is used to prioritize the population for debris removal target selection. Long-term projections of debris growth under nominal launch traffic, mitigation practices, and fragmentation events are also discussed, and latitudinal synchronization of the GEO debris population is explained via node variations arising from luni-solar gravity. In addition to characterizing localized debris congestion in the GEO ring, this thesis further investigates the conjunction risk to operational satellites or debris removal systems applying low-thrust propulsion to raise orbit altitude at end-of-life to a super-synchronous disposal orbit. Conjunction risks as a function of thrust level, miss distance, longitude, and semi-major axis are evaluated, and a guidance method for evading conjuncting debris with continuous thrust by means of a thrust heading change via single-shooting is developed.
Full velocity difference model for a car-following theory.
Jiang, R; Wu, Q; Zhu, Z
2001-07-01
In this paper, we present a full velocity difference model for a car-following theory based on previous models in the literature. To our knowledge, the model is a theoretical improvement over previous ones because it accounts for more aspects of the car-following process; this point is verified by numerical simulation. We then investigate the properties of the model using both analytic and numerical methods and find that the model can describe the phase transition of traffic flow and estimate the evolution of traffic congestion.
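For readers unfamiliar with this model class, the sketch below integrates the full velocity difference (FVD) dynamics, dv_n/dt = κ[V(Δx_n) − v_n] + λΔv_n, on a ring road with explicit Euler steps. The tanh-shaped optimal-velocity function and all parameter values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def V(dx):
    # Assumed tanh optimal-velocity function (m/s) for illustration.
    return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

def step(x, v, kappa=0.41, lam=0.5, dt=0.1):
    dx = np.roll(x, -1) - x          # headway to the leader
    dx[-1] += 1500.0                 # ring length closes the last headway
    dv_rel = np.roll(v, -1) - v      # velocity difference to the leader
    a = kappa * (V(dx) - v) + lam * dv_rel
    return x + v * dt, np.clip(v + a * dt, 0.0, None)

# 50 cars on a 1500 m ring, started near equilibrium speeds.
x = np.sort(np.random.uniform(0.0, 1500.0, 50))
v = V(np.diff(np.append(x, x[0] + 1500.0)))
for _ in range(10000):
    x, v = step(x, v)
```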
Pseudorange error analysis for precise indoor positioning system
NASA Astrophysics Data System (ADS)
Pola, Marek; Bezoušek, Pavel
2017-05-01
There is a currently developed system of transmitter indoor localization intended for fire fighters or members of rescue corps. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time difference of arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the direct-path signal time-of-arrival determination caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
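A minimal sketch of the first of these two methods, under stated assumptions: correlate the received signal with the known transmitted waveform and take the earliest strong correlation peak, since later, possibly stronger peaks may be multipath echoes. The 0.5·max threshold for "strong" is an assumption, not a detail from the article.

```python
import numpy as np

def toa_cross_correlation(received, template, fs):
    """Estimate direct-path time of arrival (seconds) at sample rate fs (Hz)."""
    corr = np.correlate(received, template, mode="full")
    lags = np.arange(-len(template) + 1, len(received))
    # Earliest lag whose correlation reaches half the maximum (assumed rule
    # to favor the direct path over stronger multipath echoes).
    peak = np.flatnonzero(corr >= 0.5 * corr.max())[0]
    return lags[peak] / fs
```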
USDA-ARS's Scientific Manuscript database
Critical path analysis (CPA) is a method for estimating macroscopic transport coefficients of heterogeneous materials that are highly disordered at the micro-scale. Developed originally to model conduction in semiconductors, numerous researchers have noted that CPA might also have relevance to flow ...
The Returns to Community College
ERIC Educational Resources Information Center
Agan, Amanda Yvonne
2013-01-01
Almost half of postsecondary students are currently enrolled in community colleges. The variety of programs these institutions offer implies that, even among students with the same degree outcome, there is considerable heterogeneity in the path taken to get there. I estimate the life-cycle private and social returns to the different postsecondary paths and sequential decisions…
Piloting Systems Reset Path Integration Systems during Position Estimation
ERIC Educational Resources Information Center
Zhang, Lei; Mou, Weimin
2017-01-01
During locomotion, individuals can determine their positions with either idiothetic cues from movement (path integration systems) or visual landmarks (piloting systems). This project investigated how these 2 systems interact in determining humans' positions. In 2 experiments, participants studied the locations of 5 target objects and 1 single…
Optimization of educational paths for higher education
NASA Astrophysics Data System (ADS)
Tarasyev, Alexandr A.; Agarkov, Gavriil; Medvedev, Aleksandr
2017-11-01
In our research, we combine the theory of economic behavior with a methodology for increasing the efficiency of human capital to estimate optimal educational paths. We provide an optimization model for the higher education process to analyze the possible educational paths of each rational individual. The preferences of each rational individual are compared to the best economically feasible educational path. The main factor in individual choice, which shapes the optimal educational path, is the salary level in the chosen economic sector after graduation. Another factor that influences economic profit is the reduction of educational costs or the availability of budget support for the student. The main outcome of this research is a correction of governmental policy on investment in human capital, based on the results of optimal control of educational paths.
The importance of antipersistence for traffic jams
NASA Astrophysics Data System (ADS)
Krause, Sebastian M.; Habel, Lars; Guhr, Thomas; Schreckenberg, Michael
2017-05-01
Universal characteristics of road networks and traffic patterns can help to forecast and control traffic congestion. The antipersistence of traffic flow time series has been found for many data sets, but its relevance for congestion has been overlooked. Based on empirical data from motorways in Germany, we study how the antipersistence of traffic flow time series impacts the duration of traffic congestion on a wide range of time scales. We find a large number of short-lasting traffic jams, which implies a large risk of rear-end collisions.
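Antipersistence here means a Hurst exponent H < 0.5. The sketch below estimates H for a traffic flow series via classical rescaled-range (R/S) analysis; the window sizes are arbitrary illustrative choices, and the series is assumed long enough to fill the largest window.

```python
import numpy as np

def hurst_rs(series, windows=(16, 32, 64, 128, 256)):
    """Rescaled-range estimate of the Hurst exponent; H < 0.5 = antipersistent."""
    rs = []
    for w in windows:
        chunks = np.asarray(series[: len(series) // w * w]).reshape(-1, w)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviations
        s = chunks.std(axis=1)                  # per-window standard deviation
        s = np.where(s > 0, s, np.nan)          # skip constant windows
        rs.append(np.nanmean(r / s))
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope  # Hurst exponent estimate
```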
Congestive cardiomyopathy and endobronchial granulomas as manifestations of Churg-Strauss syndrome.
Alvarez-Sala, R.; Prados, C.; Armada, E.; Del Arco, A.; Villamor, J.
1995-01-01
Churg-Strauss syndrome is a systemic vasculitis. Its most frequent complications are heart diseases and asthma. Usually, cardiological manifestations are pericarditis, cardiac failure and myocardial infarction. Endobronchial granulomas identified by bronchoscopy are unusual. We present the case of a man with congestive cardiomyopathy and endobronchial granulomas macroscopically visible at bronchoscopy. After a review of the medical literature, we found one case of congestive cardiomyopathy and no cases of endobronchial granulomas observed by bronchoscopy associated with Churg-Strauss syndrome. PMID:7644400
Algorithm and data support of traffic congestion forecasting in the controlled transport
NASA Astrophysics Data System (ADS)
Dmitriev, S. V.
2015-06-01
The topicality of the problem of traffic congestion forecasting in the logistic systems of product movement along highways is considered. The concepts of the controlled territory, highway occupancy by vehicles, and parking within the controlled territory are introduced. The technical realizability of organizing the necessary flow of information on the state of the transport system for its regulation is noted. The sequence of practical implementation of the solution is given. An algorithm for predicting traffic congestion in the controlled transport system is suggested.
Connected vehicle freeway speed harmonization systems.
DOT National Transportation Integrated Search
2016-03-15
The capacity drop phenomenon, which reduces the maximum bottleneck discharge rate following the onset of congestion, is a critical restriction in transportation networks that causes additional traffic congestion. Consequently, preventing or reducing ...
DOT National Transportation Integrated Search
2015-01-01
Speed harmonization is a method to reduce congestion and improve traffic performance. This method is applied at points where lanes merge and form bottlenecks, the greatest cause of congestion nationwide. The strategy involves gradually lowering speed...
Metra, Marco; Cotter, Gad; Senger, Stefanie; Edwards, Christopher; Cleland, John G; Ponikowski, Piotr; Cursack, Guillermo C; Milo, Olga; Teerlink, John R; Givertz, Michael M; O'Connor, Christopher M; Dittrich, Howard C; Bloomfield, Daniel M; Voors, Adriaan A; Davison, Beth A
2018-05-01
The importance of a serum creatinine increase, traditionally considered worsening renal function (WRF), during admission for acute heart failure has been recently debated, with data suggesting an interaction between congestion and creatinine changes. In post hoc analyses, we analyzed the association of WRF with length of hospital stay, 30-day death or cardiovascular/renal readmission, and 90-day mortality in the PROTECT study (Placebo-Controlled Randomized Study of the Selective A1 Adenosine Receptor Antagonist Rolofylline for Patients Hospitalized With Acute Decompensated Heart Failure and Volume Overload to Assess Treatment Effect on Congestion and Renal Function). Daily creatinine changes from baseline were categorized as WRF (an increase of 0.3 mg/dL or more) or not. Daily congestion scores were computed by summing scores for orthopnea, edema, and jugular venous pressure. Of the 2033 total patients randomized, 1537 patients had both creatinine and congestion data available at study day 14. Length of hospital stay was longer and 30-day cardiovascular/renal readmission or death more common in patients with WRF. However, these associations were driven by patients with concomitant congestion at the time of assessment of renal function. The mean difference in length of hospital stay attributable to WRF was 3.51 more days (95% confidence interval, 1.29-5.73; P=0.0019), and the hazard ratio for WRF on 30-day death or heart failure hospitalization was 1.49 times higher (95% confidence interval, 1.06-2.09; P=0.0205), in significantly congested than in nonsignificantly congested patients. A similar trend was observed for 90-day mortality, although it was not statistically significant. In patients admitted for acute heart failure, WRF defined as a creatinine increase of ≥0.3 mg/dL was associated with longer length of hospital stay and worse 30- and 90-day outcomes. However, effects were largely driven by patients who had residual congestion at the time of renal function assessment. URL: https://www.clinicaltrials.gov. Unique identifiers: NCT00328692 and NCT00354458. © 2018 American Heart Association, Inc.
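The two classifications the analysis rests on are simple to state in code. The sketch below is a paraphrase of the definitions above, not the study's analysis code; the field names and the per-item scoring ranges are assumptions.

```python
def has_wrf(baseline_creatinine, daily_creatinine):
    """Worsening renal function: any daily creatinine >= 0.3 mg/dL above baseline."""
    return any(c - baseline_creatinine >= 0.3 for c in daily_creatinine)

def congestion_score(orthopnea, edema, jvp):
    """Composite daily congestion score: sum of the three item scores
    (orthopnea, edema, jugular venous pressure), each assumed scored 0-3."""
    return orthopnea + edema + jvp
```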
What is the correct cost functional for variational data assimilation?
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2018-03-01
Variational approaches to data assimilation, and weakly constrained four-dimensional variational assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes being claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice, where SDEs are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
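To make the distinction concrete, the block below writes out the two functionals in their standard scalar form for the SDE dX_t = b(X_t) dt + σ dW_t. This is a sketch of the textbook one-dimensional case, not the paper's general statement.

```latex
% Energy ("least squares") functional:
J_{E}[x] \;=\; \frac{1}{2\sigma^{2}} \int_{0}^{T} \bigl(\dot{x}(t) - b(x(t))\bigr)^{2}\,dt .
% Onsager-Machlup functional, which carries an extra divergence correction
% (in higher dimensions, b'(x) becomes \nabla \cdot b(x)):
J_{\mathrm{OM}}[x] \;=\; J_{E}[x] \;+\; \frac{1}{2} \int_{0}^{T} b'(x(t))\,dt .
```

The divergence term is what the energy functional omits; it vanishes only in special cases (e.g., constant drift), which is why the two functionals do not, in general, pick out the same minimising path.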
Flight Management System Execution of Idle-Thrust Descents in Operations
NASA Technical Reports Server (NTRS)
Stell, Laurel L.
2011-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its error models, commercial flights executed idle-thrust descents, and the recorded data includes the target speed profile and FMS intent trajectories. The FMS computes the intended descent path assuming idle thrust after top of descent (TOD), and any intervention by the controllers that alters the FMS execution of the descent is recorded so that such flights are discarded from the analysis. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location are extracted from the radar data. Using more than 60 descents in Boeing 777 aircraft, the actual speeds are compared to the intended descent speed profile. In addition, three aspects of the accuracy of the FMS intent trajectory are analyzed: the meter fix crossing time, the TOD location, and the altitude at the meter fix. The actual TOD location is within 5 nmi of the intent location for over 95% of the descents. Roughly 90% of the time, the airspeed is within 0.01 of the target Mach number and within 10 KCAS of the target descent CAS, but the meter fix crossing time is only within 50 sec of the time computed by the FMS. Overall, the aircraft seem to be executing the descents as intended by the designers of the onboard automation.
Evaluation of the Terminal Sequencing and Spacing System for Performance Based Navigation Arrivals
NASA Technical Reports Server (NTRS)
Thipphavong, Jane; Jung, Jaewoo; Swenson, Harry N.; Martin, Lynne; Lin, Melody; Nguyen, Jimmy
2013-01-01
NASA has developed the Terminal Sequencing and Spacing (TSS) system, a suite of advanced arrival management technologies combining time-based scheduling and controller precision spacing tools. TSS is a ground-based controller automation tool that facilitates sequencing and merging arrivals that have both current standard ATC routes and terminal Performance-Based Navigation (PBN) routes, especially during highly congested demand periods. In collaboration with the FAA and MITRE's Center for Advanced Aviation System Development (CAASD), TSS system performance was evaluated in human-in-the-loop (HITL) simulations with currently active controllers as participants. Traffic scenarios had mixed Area Navigation (RNAV) and Required Navigation Performance (RNP) equipage, where the more advanced RNP-equipped aircraft had preferential treatment with a shorter approach option. Simulation results indicate the TSS system achieved benefits by enabling PBN while maintaining high throughput rates, 10% above baseline demand levels. Flight path predictability improved: path deviation was reduced by 2 NM on average, and variance in the downwind leg length was 75% less. Arrivals flew more fuel-efficient descents for longer, spending an average of 39 seconds less in step-down level altitude segments. Self-reported controller workload was reduced, with statistically significant differences at the p < 0.01 level. The RNP-equipped arrivals were also able to more frequently capitalize on the benefits of being "Best-Equipped, Best-Served" (BEBS), where less vectoring was needed and nearly all RNP approaches were conducted without interruption.
Calibration of TOPEX/POSEIDON at Platform Harvest
NASA Technical Reports Server (NTRS)
Christensen, E. J.; Haines, B. J.; Keihm, S. J.; Morris, C. S.; Norman, R. A.; Purcell, G. H.; Williams, B. G.; Wilson, B. D.; Born, G. H.; Parke, M. E.
1994-01-01
We present estimates for the mean bias of the TOPEX/POSEIDON NASA altimeter (ALT) and the Centre National d'Etudes Spatiales altimeter (SSALT) using in-situ data gathered at Platform Harvest during the first 36 cycles of the mission. Data for 21 overflights of the ALT and six overflights of the SSALT have been analyzed. The analysis includes an independent assessment of in-situ measurements of sea level, the radial component of the orbit, wet tropospheric path delay, and ionospheric path delay. (The sign convention used is such that, to correct the geophysical data record values for sea level, add the bias algebraically. Unless otherwise stated, the uncertainty in a given parameter is depicted by ±σ_x, where σ_x is the sample standard deviation of x about the mean.) Tide gauges at Harvest provide estimates of sea level with an uncertainty of ±1.5 cm. The uncertainty in the radial component of the orbit is estimated to be ±1.3 cm. In-situ measurements of tropospheric path delay at Harvest compare to within ±1.3 cm of the TOPEX/POSEIDON microwave radiometer, and in-situ measurements of the ionospheric path delay compare to within -0.4 ± 0.7 cm of the dual-frequency ALT and 1.1 ± 0.6 cm of Doppler orbitography and radiopositioning integrated by satellite. We obtain mean bias estimates of -14.5 ± 2.9 cm for the ALT and +0.9 ± 3.1 cm for the SSALT (where the uncertainties are based on the standard deviation of the estimated mean, σ_{x̄/y}, which is derived from sample statistics and estimates for errors that cannot be observed). These results are consistent with independent estimates for the relative bias between the two altimeters. A linear regression applied to the complete set of data shows that there is a discernible secular trend in the time series for the ALT bias estimates. A preliminary analysis of data obtained through cycle 48 suggests that the apparent secular drift may be the result of a poorly sampled annual signal.
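The bias bookkeeping itself is a small calculation: the mean of the per-overflight altimeter-minus-in-situ differences, with the standard deviation of that estimated mean (σ/√n) as its sample-statistics component. The sketch below shows only this part, under the assumption that the differences are given; it does not include the unobservable-error terms the parenthetical above mentions.

```python
import numpy as np

def mean_bias(differences_cm):
    """Mean altimeter bias and standard deviation of the estimated mean (cm)."""
    d = np.asarray(differences_cm, dtype=float)  # altimeter minus in-situ sea level
    return d.mean(), d.std(ddof=1) / np.sqrt(d.size)
```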
Loose, Irene; Winkel, Matthias
2004-01-01
It was the aim of this clinical study to demonstrate the efficacy of 1000 mg acetylsalicylic acid (ASA, CAS 50-78-2) in combination with 60 mg pseudoephedrine (PSE, CAS 90-82-4), compared with placebo, in the symptomatic treatment of nasal congestion associated with the common cold. A further aim was to demonstrate the efficacy of 500 mg ASA + 30 mg PSE and of 1000 mg paracetamol (CAS 103-90-2) + 60 mg PSE (active control) in the symptomatic treatment of nasal congestion. The study was designed as a randomized, two-center, double-blind, double-dummy, placebo-controlled, parallel-group, single-dose efficacy and safety trial over 6 h and was carried out in the USA. In total, 643 patients at two centers who had a history and diagnosis of acute upper respiratory tract infection (URTI) were included; they showed symptoms such as nasal congestion, scratchy/sore throat, headache, generalized muscle ache, earache, runny nose, fever, and sneezing. The investigational drugs ASA and PSE were both provided as granules in sachets, and the granules were dissolved in water before administration; the combined preparation of paracetamol + PSE was administered as commercially available tablets encapsulated for blinding. For all preparations, matching placebos were provided. The primary efficacy variable was the area under the curve for differences from baseline on a nasal congestion scale in the first 2 h after treatment. To be eligible for the study, otherwise healthy volunteers were to present with nasal stuffiness of recent onset that reached a score of at least 6 on the 11-point scale for nasal congestion (0 = not stuffy, 10 = very stuffy). The primary analysis of the primary efficacy variable was calculated by analysis of variance including treatment group, severity (moderate/severe), and center as main strata. The analysis was performed using the intent-to-treat population. All active treatments proved to be statistically significantly superior to placebo with regard to the primary efficacy variable. Significant superiority of active treatment compared with placebo could also be demonstrated for an interval of up to 6 h after intake of the drug and for the relief of nasal congestion. The lower dose did not show significantly different results compared with placebo for relief of nasal congestion in patients with a severe nasal congestion score at baseline. Likewise, in patients with a moderate nasal congestion score (NCS) at the start of the study, the difference from baseline in the NCS compared with placebo was not statistically significant. Thus a trend towards better efficacy at the higher dose could be assumed. No difference was found between 1000 mg ASA + 60 mg PSE and the active control. There were no differences between the two centers. The treatment proved to be safe and well tolerated, without relevant differences between the four treatment groups. The main adverse events were related to the upper respiratory tract infection or were gastrointestinal in nature. In conclusion, the combination of ASA with PSE can be considered an effective and safe remedy for the symptomatic treatment of nasal congestion during URTI.
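The primary endpoint above reduces to a trapezoidal area under the difference-from-baseline curve over the first 2 h. The sketch below shows that computation only; the sign convention (baseline minus score, so that relief is positive) and the measurement schedule are assumptions, not details from the trial protocol.

```python
import numpy as np

def ncs_auc(times_h, scores, baseline):
    """AUC of NCS differences from baseline over 0-2 h (trapezoidal rule)."""
    t = np.asarray(times_h, dtype=float)
    diff = baseline - np.asarray(scores, dtype=float)  # assumed: relief is positive
    mask = t <= 2.0
    d, tt = diff[mask], t[mask]
    # Trapezoidal integration, in score-hours of relief.
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * (tt[1:] - tt[:-1])))
```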
Communication assisted Localization and Navigation for Networked Robots
2005-09-01
…developments such as the Mica Mote [23, 24] and the single chip called "Spec" [1] along the path to the ultimate goal of smart dust. Other technologies… path or a path defining a grid, broadcasting GPS coordinates. The sensors incrementally process all broadcasts they receive to refine their estimated… RAM, 4K EEPROM), a 916 MHz RF transceiver (50 Kbits/sec, nominal 30 m range), a UART, and a 4 Mbit serial flash. A Mote runs for approximately one month on…
NASA Astrophysics Data System (ADS)
Chatzidakis, Stylianos; Liu, Zhengzhi; Hayward, Jason P.; Scaglione, John M.
2018-03-01
This work presents a generalized muon trajectory estimation algorithm to estimate the path of a muon in either uniform or nonuniform media. The use of cosmic ray muons in nuclear nonproliferation and safeguard verification applications has recently gained attention due to the non-intrusive and passive nature of the inspection, penetrating capabilities, as well as recent advances in detectors that measure position and direction of the individual muons before and after traversing the imaged object. However, muon image reconstruction techniques are limited in resolution due to low muon flux and the effects of multiple Coulomb scattering (MCS). Current reconstruction algorithms, e.g., point of closest approach (PoCA) or straight-line path (SLP), rely on overly simple assumptions for muon path estimation through the imaged object. For robust muon tomography, efficient and flexible physics-based algorithms are needed to model the MCS process and accurately estimate the most probable trajectory of a muon as it traverses an object. In the present work, the use of a Bayesian framework and a Gaussian approximation of MCS is explored for estimation of the most likely path of a cosmic ray muon traversing uniform or nonuniform media and undergoing MCS. The algorithm's precision is compared to Monte Carlo simulated muon trajectories. It was found that the algorithm is expected to be able to predict muon tracks to less than 1.5 mm root mean square (RMS) for 0.5 GeV muons and 0.25 mm RMS for 3 GeV muons, a 50% improvement compared to SLP and 15% improvement when compared to PoCA. Further, a 30% increase in useful muon flux was observed relative to PoCA. Muon track prediction improved for higher muon energies or smaller penetration depth where energy loss is not significant. The effect of energy loss due to ionization is investigated, and a linear energy loss relation that is easy to use is proposed.
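As a point of reference for the PoCA baseline that the abstract compares against, the sketch below computes the point of closest approach between the incoming and outgoing muon rays. This is the standard geometric construction, not the paper's GMTE algorithm; the array-based input format is an assumption.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays (point + direction).

    p1, d1: a point on and the direction of the incoming muon track;
    p2, d2: the same for the outgoing track. Returns the assumed scattering point.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # zero only for parallel rays (degenerate case)
    s = (b * e - c * d) / denom      # closest-approach parameter on ray 1
    t = (a * e - b * d) / denom      # closest-approach parameter on ray 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```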
Hisinger-Mölkänen, Hanna; Piirilä, Päivi; Haahtela, Tari; Sovijärvi, Anssi; Pallasaho, Paula
2018-01-01
Allergic and non-allergic rhinitis cause many symptoms in everyday life. To decrease this burden, more information on preventable risk factors is needed. We assessed the prevalence of and risk factors for chronic nasal symptoms, exploring the effects of smoking, environmental tobacco smoke, exposure to occupational irritants, and their combinations. In 2016, a postal survey was conducted among a random population sample of 8000 adults in Helsinki, Finland, with a 50.5% response rate. Smoking was associated with a significant increase in the occurrence of chronic rhinitis (longstanding nasal congestion or runny nose), but not with self-reported or physician-diagnosed allergic rhinitis. The highest prevalence estimates of nasal symptoms, 55.1% for chronic rhinitis, 49.1% for nasal congestion, and 40.7% for runny nose, were found among smokers with occupational exposure to gases, fumes, or dusts. Besides active smoking, exposure to environmental tobacco smoke combined with occupational exposure also increased the risk of nasal symptoms. Smoking, environmental tobacco smoke, and occupational irritants are significant risk factors for nasal symptoms, with an additive pattern. The findings suggest that these factors should be systematically inquired about in patients with nasal symptoms so that appropriate preventive measures can be taken.
Health effects of people living close to a petrochemical industrial estate in Thailand.
Kongtip, Pornpimol; Singkaew, Panawadee; Yoosook, Witaya; Chantanakul, Suttinun; Sujiratat, Dusit
2013-12-01
Acute health effects among people living near the petrochemical industrial estate in Thailand were assessed using a panel study design. Populations in communities near the petrochemical industrial estates were recruited. The daily air pollutant concentrations and the daily percentages of respiratory and other health symptoms reported were collected for 63 days. The effects of air pollutants on reported symptoms were estimated by adjusted odds ratios and 95% confidence intervals using binary logistic regression. Significant associations were found for total volatile organic compounds (total VOCs), with adjusted odds ratios of 38.01 for wheezing, 18.63 for shortness of breath, 4.30 for eye irritation, and 3.58 for dizziness. The adjusted odds ratio for carbon monoxide (CO) was 7.71 for cough, 4.55 for eye irritation, and 3.53 for weakness, and the adjusted odds ratio for ozone (O3) was 1.02 for nose congestion and sore throat, and 1.05 for phlegm. The results showed that people living near the petrochemical industrial estate had acute adverse health effects, namely shortness of breath, eye irritation, dizziness, cough, nose congestion, sore throat, phlegm, and weakness, from exposure to industrial air pollutants.
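Adjusted odds ratios of this kind come from exponentiating logistic regression coefficients. The sketch below shows the general recipe with statsmodels; the covariate layout and function name are hypothetical, and this is not the study's own analysis script.

```python
import numpy as np
import statsmodels.api as sm

def adjusted_odds_ratios(X, y):
    """Fit a binary logistic regression and report adjusted ORs with 95% CIs.

    X: covariate matrix (pollutant concentration plus assumed confounders);
    y: 0/1 indicator of a reported symptom on each person-day.
    """
    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    or_point = np.exp(model.params)      # adjusted odds ratios
    or_ci = np.exp(model.conf_int())     # 95% confidence intervals
    return or_point, or_ci
```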
Real-time adaptive aircraft scheduling
NASA Technical Reports Server (NTRS)
Kolitz, Stephan E.; Terrab, Mostafa
1990-01-01
One of the most important functions of any air traffic management system is the assignment of ground-holding times to flights, i.e., the determination of whether and by how much the take-off of a particular aircraft headed for a congested part of the air traffic control (ATC) system should be postponed in order to reduce the likelihood and extent of airborne delays. An analysis is presented for the fundamental case in which flights from many destinations must be scheduled for arrival at a single congested airport; the formulation is also useful in scheduling the landing of airborne flights within the extended terminal area. A set of approaches is described for addressing a deterministic and a probabilistic version of this problem. For the deterministic case, where airport capacities are known and fixed, several models were developed with associated low-order polynomial-time algorithms. For general delay cost functions, these algorithms find an optimal solution. Under a particular natural assumption regarding the delay cost function, an extremely fast (O(n ln n)) algorithm was developed. For the probabilistic case, using an estimated probability distribution of airport capacities, a model was developed with an associated low-order polynomial-time heuristic algorithm with useful properties.
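The flavor of the deterministic case can be conveyed with a small sketch: flights sorted by scheduled arrival are matched to the earliest available arrival slots implied by known airport capacities, and the slot-minus-schedule gap becomes the ground hold. This first-scheduled-first-served rule is optimal only under delay-cost assumptions like the one the abstract alludes to; the single flat list of slot times (assumed sufficient to serve all flights) is a simplification, not the paper's formulation.

```python
def assign_ground_holds(scheduled_arrivals, slot_times):
    """Match flights to arrival slots in schedule order; return ground-hold delays.

    scheduled_arrivals, slot_times: times in minutes; assumes enough slots at or
    after each flight's scheduled arrival to serve every flight.
    """
    flights = sorted(scheduled_arrivals)
    slots = sorted(slot_times)
    delays, j = [], 0
    for eta in flights:
        while slots[j] < eta:          # skip slots before the flight can arrive
            j += 1
        delays.append(slots[j] - eta)  # ground hold assigned to this flight
        j += 1
    return delays
```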
Land use and traffic congestion.
DOT National Transportation Integrated Search
2012-03-01
"The study investigated the link between land use, travel behavior, and traffic congestion. Popular wisdom : suggests that higher-density development patterns may be beneficial in reducing private vehicle dependency : and use, which if true, could ho...
Guidelines for operating congested traffic signals.
DOT National Transportation Integrated Search
2010-08-01
The objective of this project was to develop guidelines for mitigating congestion in traffic signal systems. As : part of the project, researchers conducted a thorough review of literature and developed preliminary : guidelines for combating congesti...
I-55 integrated diversion traffic management benefit study.
DOT National Transportation Integrated Search
2014-11-01
Traffic congestion, recurrent and non-recurrent, creates significant economic losses and environmental impacts. : Integrated Corridor Management (ICM) is a U.S.DOT research initiative that has been proven to effectively relieve : recurrent congestion...
DOT National Transportation Integrated Search
1994-07-22
This study pursued five objectives. The first objective was the development of a study data base. Second, the study provided a general understanding of congestion. This understanding should include techniques for defining and measuring congestion as ...
Congestion information surveillance system
DOT National Transportation Integrated Search
1992-10-14
This document advances a proposal to implement a Congestion Information Surveillance System (CISS)for the Albuquerque metropolitan area. The CISS will offer agencies in the region responsible for transportation and air quality management a new capabi...
Modeling truck traffic volume growth congestion.
DOT National Transportation Integrated Search
2009-05-01
Modeling of the statewide transportation system is an important element in understanding issues and programming of funds to thwart potential congestion. As Alabama grows its manufacturing economy, the number of heavy vehicles traversing its highways ...
Measuring and improving performance in incident management.
DOT National Transportation Integrated Search
2010-03-01
Traffic incidents account for about 25 percent of traffic congestion and delay. Clearing incidents rapidly is crucial in : minimizing congestion, reducing secondary crashes and improving safety for both emergency responders and : travelers. Especiall...
Congestion Management System Process Report
DOT National Transportation Integrated Search
1996-03-01
In January 1995, the Indianapolis Metropolitan Planning Organization with the help of an interagency Study Review Committee began the process of developing a Congestion Management System (CMS) Plan resulting in this report. This report documents the ...