Joint Resource Optimization for Cognitive Sensor Networks with SWIPT-Enabled Relay.
Lu, Weidang; Lin, Yuanrong; Peng, Hong; Nan, Tian; Liu, Xin
2017-09-13
Energy-constrained wireless networks, such as wireless sensor networks (WSNs), are usually powered by fixed energy supplies (e.g., batteries), which limits the operation time of the network. Simultaneous wireless information and power transfer (SWIPT) is a promising technique to prolong the lifetime of energy-constrained wireless networks. This paper investigates the performance of an underlay cognitive sensor network (CSN) with a SWIPT-enabled relay node. In the CSN, the amplify-and-forward (AF) relay sensor node harvests energy from the ambient radio-frequency (RF) signals using the power splitting-based relaying (PSR) protocol. It then uses the harvested energy to forward the signal of the source sensor node (SSN) to the destination sensor node (DSN). We study the joint optimization of the transmit power and the power splitting ratio to maximize the CSN's achievable rate, under the constraint that the interference caused by the CSN to the primary users (PUs) stays within the permissible threshold. Simulation results show that the proposed joint resource optimization significantly improves the achievable rate of the CSN.
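The power-splitting trade-off described in the abstract above can be illustrated with a toy model. This is a minimal sketch, not the authors' formulation: the two-hop AF SNR expression is a standard high-SNR approximation, and the channel gains, noise power, conversion efficiency, and interference threshold below are illustrative assumptions only.

```python
import math

def af_psr_rate(P_s, rho, h1, h2, eta=0.7, sigma2=1e-9):
    """Achievable rate of a two-hop AF link with power-splitting SWIPT.

    rho is the power-splitting ratio: the relay harvests a fraction rho
    of the received signal power and processes the remaining 1 - rho.
    """
    P_r = eta * rho * P_s * abs(h1) ** 2            # harvested relay power
    snr1 = (1 - rho) * P_s * abs(h1) ** 2 / sigma2  # source -> relay SNR
    snr2 = P_r * abs(h2) ** 2 / sigma2              # relay -> destination SNR
    snr = snr1 * snr2 / (snr1 + snr2 + 1)           # end-to-end AF SNR (approx.)
    return 0.5 * math.log2(1 + snr)                 # 1/2: two-phase relaying

def optimize(P_max, h1, h2, g_pu, I_th, steps=1000):
    """Grid search over rho, with the source power capped so that the
    interference toward the primary user (channel gain g_pu) stays
    below the threshold I_th."""
    P_s = min(P_max, I_th / g_pu)
    return max((af_psr_rate(P_s, i / steps, h1, h2), i / steps)
               for i in range(1, steps))
```

A finer search or a convex reformulation would replace the grid search in practice; the point is only that the rate vanishes at both rho = 0 (no relay power) and rho = 1 (no information reaches the relay), so an interior optimum exists.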
Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks
Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng
2017-01-01
In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636
Ding, Xu; Han, Jianghong; Shi, Lei
2015-01-01
In this paper, the optimal working schemes for wireless sensor networks with multiple base stations and wireless energy transfer devices are proposed. The wireless energy transfer devices also work as data gatherers while charging sensor nodes. The wireless sensor network is firstly divided into sub networks according to the concept of Voronoi diagram. Then, the entire energy replenishing procedure is split into the pre-normal and normal energy replenishing stages. With the objective of maximizing the sojourn time ratio of the wireless energy transfer device, a continuous time optimization problem for the normal energy replenishing cycle is formed according to constraints with which sensor nodes and wireless energy transfer devices should comply. Later on, the continuous time optimization problem is reshaped into a discrete multi-phased optimization problem, which yields the identical optimality. After linearizing it, we obtain a linear programming problem that can be solved efficiently. The working strategies of both sensor nodes and wireless energy transfer devices in the pre-normal replenishing stage are also discussed in this paper. The intensive simulations exhibit the dynamic and cyclic working schemes for the entire energy replenishing procedure. Additionally, a way of eliminating “bottleneck” sensor nodes is also developed in this paper. PMID:25785305
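The Voronoi-based subdivision described above amounts, in discrete form, to assigning each sensor to its nearest base station. A minimal sketch (the 2-D coordinates and the Euclidean metric are illustrative assumptions):

```python
def voronoi_partition(sensors, base_stations):
    """Assign each sensor to its nearest base station -- the discrete
    analogue of splitting the network with a Voronoi diagram."""
    def d2(a, b):
        # squared Euclidean distance (no need for the square root)
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    parts = {i: [] for i in range(len(base_stations))}
    for s in sensors:
        nearest = min(range(len(base_stations)),
                      key=lambda i: d2(s, base_stations[i]))
        parts[nearest].append(s)
    return parts
```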
Wireless Energy Harvesting Two-Way Relay Networks with Hardware Impairments.
Peng, Chunling; Li, Fangwei; Liu, Huaping
2017-11-13
This paper considers a wireless energy harvesting two-way relay (TWR) network in which the relay has energy-harvesting capability and the effects of practical hardware impairments are taken into account. In particular, a power splitting (PS) receiver is adopted at the relay, which harvests the power it needs for relaying information between the source nodes from the signals those nodes transmit, and every node is assumed to suffer from hardware impairments. We analyze the effect of hardware impairments on both decode-and-forward (DF) and amplify-and-forward (AF) relaying networks. Using the newly derived expressions for the signal-to-noise-plus-distortion ratios, we obtain exact analytical expressions for the achievable sum rate and ergodic capacity of both the DF and AF relaying protocols. Additionally, the optimal power splitting (OPS) ratio that maximizes the instantaneous achievable sum rate is formulated and solved for both protocols. The performance of the DF and AF protocols is evaluated via numerical results, which also show the effects of various network parameters on the system performance and on the OPS ratio design.
Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs
NASA Astrophysics Data System (ADS)
Wolf, M.; Mascheroni, M.; Woodard, A.; Belforte, S.; Bockelman, B.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the alloted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
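The probe-then-split idea behind automatic splitting can be sketched in a few lines. This is not CRAB3 code: in the real system the per-event time estimate comes from probe jobs and the target runtime is a tuning parameter; both are plain arguments here.

```python
def split_jobs(total_events, secs_per_event, target_runtime):
    """Group events into jobs sized so that each job's estimated runtime
    fits within target_runtime (illustrative sketch of automatic splitting).

    Returns a list of (start, end) half-open event ranges."""
    per_job = max(1, int(target_runtime // secs_per_event))
    jobs = []
    start = 0
    while start < total_events:
        end = min(start + per_job, total_events)
        jobs.append((start, end))
        start = end
    return jobs
```

A job that overruns its estimate would be terminated at the time limit and its unfinished range fed back through the same function with an updated (larger) per-event estimate.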
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the forward-step computations have been completed for the first portion of lines. This algorithm has data available for other computational tasks while processors are otherwise idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define the optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. Computational experiments and the theoretical model show that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm over the range of processor (subdomain) counts and grid nodes per subdomain considered.
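For reference, the serial Thomas algorithm that the pipelined variant reorganizes is the standard O(n) tridiagonal solve. This is a textbook sketch, independent of the paper's parallel scheduling:

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal (length n, a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector x."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Backward substitution: recover the unknowns
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The pipelining in the paper comes from the data dependence visible above: each forward-sweep step needs only its left neighbor, and each backward step only its right neighbor, so lines can be streamed through processors.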
Liu, Yiqi; Tu, Xiaohu; Xu, Qin; Bai, Chenxiao; Kong, Chuixing; Liu, Qi; Yu, Jiahui; Peng, Qiangqiang; Zhou, Xiangshan; Zhang, Yuanxing; Cai, Menghao
2018-01-01
As a promising one-carbon renewable substrate for industrial biotechnology, methanol has attracted much attention. However, the engineering of microorganisms for industrial production of pharmaceuticals from a methanol substrate is still in its infancy. In this study, the methylotrophic yeast Pichia pastoris was used to produce the anti-hypercholesterolemia pharmaceutical lovastatin and its precursor monacolin J from methanol. The biosynthetic pathways for monacolin J and lovastatin were first assembled and optimized in single strains using single copies of the relevant biosynthetic genes, and yields of 60.0 mg/L monacolin J and 14.4 mg/L lovastatin were obtained from methanol following pH-controlled monoculture. To overcome limitations imposed by the accumulation of intermediates and metabolic stress in monoculture, pathway-splitting and co-culture approaches were developed. Two pathway-splitting strategies for monacolin J, and four for lovastatin, were tested at different metabolic nodes. Biosynthesis of monacolin J and lovastatin improved by 55% and 71%, respectively, when the upstream and downstream modules were separately accommodated in two different fluorescent strains split at the metabolic node of dihydromonacolin L. However, splitting the pathway at monacolin J blocked lovastatin biosynthesis in all designs, mainly because of monacolin J's limited ability to cross cellular membranes. Bioreactor fermentations were tested for the optimal co-culture strategies, and yields of 593.9 mg/L monacolin J and 250.8 mg/L lovastatin were achieved. This study provides an alternative method for the production of monacolin J and lovastatin and reveals the potential of a methylotrophic yeast to produce complex pharmaceuticals from methanol.
Split Node and Stress Glut Methods for Dynamic Rupture Simulations in Finite Elements.
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Bielak, J.
2008-12-01
I present two numerical techniques for solving the dynamic rupture problem: I revisit and modify the Split Node approach and introduce a Stress Glut-type method. Both algorithms are implemented in an iso/sub-parametric FEM solver. For the first, I discuss the formulation and perform a convergence analysis for different orders of approximation in the acoustic case. For the second, I describe the algorithm as well as the assumptions made. The key to the new technique is an accurate representation of the traction; thus, I devote part of the discussion to analyzing the tractions for a simple example. The sensitivity of the method is tested by comparing against Split Node solutions.
Perumal, Madhumathy; Dhandapani, Sivakumar
2015-01-01
Data gathering and optimal path selection in wireless sensor networks (WSNs) using existing protocols result in collisions, and increased collisions further increase the probability of packet drops. There is thus a need to eliminate collisions during data aggregation and to increase efficiency while maintaining security. This paper proposes a reliable, secure, and energy-efficient WSN routing protocol with minimum delay, named the relay node based secure routing protocol for multiple mobile sink (RSRPMS). The protocol finds the rendezvous point for optimal data transmission using a "splitting tree" technique in a tree-shaped network topology, and the "Biased Random Walk" model is then used to determine the subsequent positions of a sink. In case of an event, the sink gathers data from all sources when they are within the sensing range of the rendezvous point; otherwise, a relay node is selected from among its neighbors to transfer packets from the rendezvous point to the sink. Symmetric-key cryptography is used for secure transmission. The proposed RSRPMS protocol is evaluated through simulation, and the results are compared with the Intelligent Agent-Based Routing (IAR) protocol to show an increase in network lifetime compared with other routing protocols.
A hybrid 3D spatial access method based on quadtrees and R-trees for globe data
NASA Astrophysics Data System (ADS)
Gong, Jun; Ke, Shengnan; Li, Xiaomin; Qi, Shuhua
2009-10-01
A 3D spatial access method for globe data is a crucial technique for a virtual earth. This paper presents a new maintenance method for indexing 3D objects distributed over the whole surface of the earth, which integrates 1:1,000,000-scale topographic map tiles, quadtrees, and R-trees. When traditional methods are extended into 3D space, the performance of the spatial index deteriorates badly, as with the 3D R-tree. To address this problem, a new dynamic R-tree algorithm is put forward, comprising two sub-procedures: node choosing and node splitting. The node-choosing algorithm adopts a new strategy: instead of the traditional top-down traversal, it first proceeds bottom-up and then top-down, which effectively mitigates the negative influence of node overlap. In the node-split algorithm, a 2-to-3 split mode replaces the traditional 1-to-2 mode, which better accounts for the shape and size of nodes. Because of the resulting well-balanced tree shape, this R-tree method can easily integrate the concept of LOD (level of detail). It can therefore be implemented in commercial DBMSs and adopted in time-critical 3D GIS systems.
A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid
NASA Astrophysics Data System (ADS)
Sulaimanov, Z. M.; Shumilov, B. M.
2017-10-01
For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in which the coefficients are found by solving a tridiagonal system of linear algebraic equations. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
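The even/odd node splitting at the heart of such wavelet algorithms can be illustrated with the simplest lifting-style scheme. This is a linear-prediction sketch, not the authors' cubic-spline construction, which instead solves a tridiagonal system for the coefficients:

```python
def lazy_split_predict(samples):
    """Even/odd splitting with a linear prediction step (lifting scheme).

    Returns the coarse even-indexed samples and the detail coefficients:
    each odd sample's deviation from the linear trend of its even
    neighbors. Smooth (locally linear) data gives near-zero details."""
    even = samples[0::2]
    odd = samples[1::2]
    details = []
    for k, o in enumerate(odd):
        left = even[k]
        # at the right boundary, fall back to the left neighbor
        right = even[k + 1] if k + 1 < len(even) else even[k]
        details.append(o - 0.5 * (left + right))
    return even, details
```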
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Kareem Jebur, Tuka
2018-05-01
Mobile ad hoc networks (MANETs) play a critical role in today's wireless ad hoc network research and consist of active nodes that can move freely. Because routing is a very important problem in such networks, we propose a method based on a modified radial basis function network (RBFN) and a self-organizing map (SOM). Performance can be improved through clustering, owing to the heavy congestion in the whole network: the network is split into clusters using the SOM, and clustering performance is further improved by the cluster head selection and the number of clusters. The modified radial basis neural network is a simple, adaptable, and efficient method to increase node lifetime; the packet delivery ratio and throughput of the network increase, and connections become more useful because the optimal path has the best parameters among candidate paths, including the best bitrate and the best link lifetime with minimum delay. The proposed routing algorithm relies on a group of factors and parameters to select the path between two points in the wireless network. The SOM clustering time averages 1-10 ms for static nodes and 8-75 ms for mobile nodes, while the routing time ranges from 92 to 510 ms. The proposed system is faster than Dijkstra's algorithm by 150-300% and faster than the unmodified RBFNN by 145-180%.
Association between split selection instability and predictive error in survival trees.
Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T
2006-01-01
We evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated in the root node by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform as well as pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.
The stopping rules for winsorized tree
NASA Astrophysics Data System (ADS)
Ch'ng, Chee Keong; Mahat, Nor Idayu
2017-11-01
The winsorized tree is a modified tree-based classifier that investigates and handles outliers in every node during the construction of the tree. It avoids the tedious process of constructing a classical tree, in which branch splitting and pruning proceed concurrently to keep the tree from growing bushy; this mechanism is controlled by the proposed algorithm. In the winsorized tree, data are screened to identify outliers; if an outlier is detected, its value is neutralized using the winsorize approach. Both outlier identification and value neutralization are executed recursively in every node until a predetermined stopping criterion is met. The aim of this paper is to find a significant stopping criterion that stops the tree from splitting further before overfitting. The results of the experiment conducted on the Pima Indian dataset show that a node can produce its final successor nodes (leaves) when it has reached the range of 70% in information gain.
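The per-node value-neutralization step can be sketched as plain winsorization. The percentile cutoffs below are illustrative assumptions, not the paper's settings:

```python
def winsorize(values, lower_pct=0.05, upper_pct=0.95):
    """Clamp extreme values to the given percentiles -- the outlier
    neutralization step applied within a node of a winsorized tree.
    Percentiles are taken as simple order statistics of the sorted data."""
    s = sorted(values)
    n = len(s)
    lo = s[int(lower_pct * (n - 1))]
    hi = s[int(upper_pct * (n - 1))]
    return [min(max(v, lo), hi) for v in values]
```

Unlike trimming, winsorization keeps every observation (so node sample sizes are preserved) and only pulls outliers in to the cutoff values.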
Delayed grafting for banked skin graft in lymph node flap transfer.
Ciudad, Pedro; Date, Shivprasad; Orfaniotis, Georgios; Dower, Rory; Nicoli, Fabio; Maruccia, Michele; Lin, Shu-Ping; Chuang, Chu-Yi; Chuang, Tsan-Yu; Wang, Gou-Jen; Chen, Hung-Chi
2017-02-01
Over the last decade, lymph node flap (LNF) transfer has proven to be an effective method in the management of lymphoedema of the extremities. Most of the time, the pockets created for an LNF cannot be closed primarily and need to be resurfaced with split-thickness skin grafts, and partial graft loss was frequently noted in these cases. The need to prevent graft loss on these iatrogenic wounds led us to explore delayed skin grafting. We herein report our experience with delayed grafting using autologous banked split-skin grafts in cases of LNF transfer for lymphoedema of the extremities. Ten patients with International Society of Lymphology stage II-III lymphoedema of an upper or lower extremity were included in this study over an 8-month period. All patients were thoroughly evaluated and underwent lymph node flap transfer. The split-skin graft was harvested and banked at the donor site, avoiding immediate resurfacing over the flap; grafting was then carried out in an aseptic manner as a bedside procedure after confirming flap viability and allowing flap swelling to subside. Patients were followed up to evaluate long-term outcomes. Flap survival was 100%. Successful delayed skin grafting was performed between the 4th and 6th post-operative days as a bedside procedure under local anaesthesia, and split-thickness skin graft (STSG) take was more than 97%. One patient needed additional medication during the bedside procedure. All patients had minimal post-operative pain and minimal additional skin graft requirement, and they reported satisfaction with the final aesthetic results. There were no complications related to either the skin grafts or the donor sites during the entire follow-up period. Delayed split-skin grafting is a reliable method of resurfacing lymph node flaps and has been shown to reduce the possibility of flap complications as well as operative time and costs.
C-fuzzy variable-branch decision tree with storage and classification error rate constraints
NASA Astrophysics Data System (ADS)
Yang, Shiueng-Bien
2009-10-01
The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to the classification error rate. However, the CFDT design does not consider the time taken to classify the input vector, so it can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree, while the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.
NASA Technical Reports Server (NTRS)
Bakuckas, J. G.; Tan, T. M.; Lau, A. C. W.; Awerbuch, J.
1993-01-01
A finite element-based numerical technique has been developed to simulate damage growth in unidirectional composites. This technique incorporates elastic-plastic analysis, micromechanics analysis, failure criteria, and a node splitting and node force relaxation algorithm to create crack surfaces. Any combination of fiber and matrix properties can be used. One of the salient features of this technique is that damage growth can be simulated without pre-specifying a crack path. In addition, multiple damage mechanisms in the forms of matrix cracking, fiber breakage, fiber-matrix debonding and plastic deformation are capable of occurring simultaneously. The prevailing failure mechanism and the damage (crack) growth direction are dictated by the instantaneous near-tip stress and strain fields. Once the failure mechanism and crack direction are determined, the crack is advanced via the node splitting and node force relaxation algorithm. Simulations of the damage growth process in center-slit boron/aluminum and silicon carbide/titanium unidirectional specimens were performed. The simulation results agreed quite well with the experimental observations.
Performance Optimization of Priority Assisted CSMA/CA Mechanism of 802.15.6 under Saturation Regime
Shakir, Mustafa; Rehman, Obaid Ur; Rahim, Mudassir; Alrajeh, Nabil; Khan, Zahoor Ali; Khan, Mahmood Ashraf; Niaz, Iftikhar Azim; Javaid, Nadeem
2016-01-01
Due to recent developments in the field of Wireless Sensor Networks (WSNs), Wireless Body Area Networks (WBANs) have become a major area of interest for developers and researchers. The human body exhibits postural mobility, due to which distance variation occurs and the status of connections amongst sensors changes over time. One of the major requirements of a WBAN is to prolong the network lifetime without compromising other performance measures, i.e., delay, throughput, and bandwidth efficiency. Node prioritization is one possible way to obtain optimum performance in a WBAN. The IEEE 802.15.6 CSMA/CA standard splits nodes into different user priorities based on Contention Window (CW) size, with smaller CW sizes assigned to higher-priority nodes. This standard helps to reduce delay; however, it is not energy efficient. In this paper, we propose a hybrid node prioritization scheme based on IEEE 802.15.6 CSMA/CA to reduce energy consumption and maximize network lifetime. In this scheme, optimum performance is achieved by prioritizing nodes based on both CW size and power within each user priority. The proposed scheme reduces the average backoff time for channel access through CW-based prioritization, while power-based prioritization within a user priority helps to minimize the required number of retransmissions. Furthermore, we compare our scheme with the IEEE 802.15.6 CSMA/CA standard (CW-assisted node prioritization) and with power-assisted node prioritization under postural mobility in a WBAN. Mathematical expressions are derived to obtain an accurate analytical model of the throughput, delay, bandwidth efficiency, energy consumption, and lifetime for each node prioritization scheme. To validate the analytical model, we performed simulations in the OMNeT++/MiXiM framework. Analytical and simulation results show that the proposed hybrid node prioritization scheme outperforms the other node prioritization schemes in terms of average network delay, average throughput, average bandwidth efficiency, and network lifetime. PMID:27598167
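The effect of CW-based prioritization on the average backoff can be sketched with a toy simulation. Drawing the counter uniformly from [1, CW] is a simplification of the 802.15.6 backoff rules (no collisions, no CW doubling), and the CW values used in the test are illustrative:

```python
import random

def mean_backoff(cw, trials=10000, seed=1):
    """Average backoff (in slots) when the counter is drawn uniformly
    from [1, cw]. A smaller CW (higher user priority) yields a shorter
    average wait for channel access."""
    rng = random.Random(seed)
    return sum(rng.randint(1, cw) for _ in range(trials)) / trials
```

For example, a high-priority node with CW = 2 waits about 1.5 slots on average, while a low-priority node with CW = 16 waits about 8.5, which is the delay gap that CW-based prioritization exploits.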
Nash, A A; Ashford, N P
1982-01-01
Mice simultaneously injected intravenously and subcutaneously with herpes simplex virus fail to adoptively transfer delayed hypersensitivity (DH) to syngeneic recipients. The transferred lymph node cells also failed to rapidly eliminate infectious herpes from the pinna, despite the presence of cytotoxic T cells in the transferred suspension. Both primary and secondary cytotoxic cell responses in the draining lymph node were unaffected by the inhibition of DH. The lymph nodes from DH-tolerized mice also contain lymphocytes capable of undergoing a proliferative response in vitro to herpes antigens. In addition, a neutralizing antibody response with IgG antibodies against herpes is also present in DH-tolerized mice. These data suggest a form of split T-cell tolerance in which only DH responses are directly compromised. The implications of these findings for the pathogenesis of herpes simplex virus are discussed. PMID:6279490
System for solving diagnosis and hitting set problems
NASA Technical Reports Server (NTRS)
Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)
2007-01-01
The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
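The search described above amounts to computing minimal hitting sets of the conflict sets. A minimal Python sketch of that idea (not the patented implementation) branches on each component of an un-hit conflict set until every branch reaches a leaf, then filters out non-minimal candidates:

```python
def hitting_sets(conflicts, chosen=frozenset()):
    """Yield hitting-set candidates by splitting on components of the
    first conflict set not yet hit (the search-tree node splitting)."""
    unhit = [c for c in conflicts if not (c & chosen)]
    if not unhit:
        yield chosen          # leaf node: every conflict set is hit
        return
    for component in sorted(unhit[0]):
        yield from hitting_sets(conflicts, chosen | {component})

def minimal_diagnoses(conflicts):
    """Keep only the minimal hitting sets: the minimal diagnosis sets."""
    conflicts = [set(c) for c in conflicts]
    candidates = set(hitting_sets(conflicts))
    return [s for s in candidates if not any(t < s for t in candidates)]
```

For conflict sets {1,2} and {2,3}, the minimal diagnoses are {2} and {1,3}: either component 2 alone, or components 1 and 3 together, explain both conflicts.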
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
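The core of node/side splitting — comparing direct with indirect evidence on one treatment contrast — can be illustrated with a back-of-envelope check on a triangle network. The effect sizes below are made-up numbers, and independence of the two evidence sources is assumed:

```python
import math

def side_split_z(direct, var_direct, indirect, var_indirect):
    """z-score for the difference between direct and indirect evidence
    on the same treatment contrast (a simple node-splitting check)."""
    diff = direct - indirect
    se = math.sqrt(var_direct + var_indirect)
    return diff / se

# Triangle ABC: indirect A-vs-B evidence comes from the A-vs-C and
# C-vs-B contrasts (effects add; variances add under independence).
d_ac, v_ac = 0.30, 0.01
d_cb, v_cb = 0.25, 0.02
d_ab_direct, v_ab_direct = 0.40, 0.02
z = side_split_z(d_ab_direct, v_ab_direct, d_ac + d_cb, v_ac + v_cb)
```

A |z| well below 1.96 here would give no evidence of inconsistency; the parameterization subtleties the paper discusses arise once multi-arm trials contribute to both sides.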
Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih
2013-03-20
Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient, fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real-world deployments.
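Under the common assumption of a homogeneous Poisson node deployment, the link between transmission range and expected node degree is E[degree] = density · π · r², which gives a closed-form starting point for a range-adjustment model (a simplification of the paper's approach):

```python
import math

def range_for_degree(k: float, density: float) -> float:
    """Transmission range giving expected node degree k when nodes are
    deployed as a homogeneous Poisson process with the given density:
    E[degree] = density * pi * r**2, solved for r."""
    return math.sqrt(k / (math.pi * density))
```

Raising the target degree k improves reachability at the cost of a larger (more energy-hungry) range, which is exactly the tradeoff the adjustment model tunes.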
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered: one minimizes the average video distortion of the nodes, and the other minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
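As a toy stand-in for the mixed-integer search (the paper uses Particle Swarm Optimization), the two criteria can be contrasted by exhaustively assigning discrete source rates under a total-rate budget; the distortion model below is illustrative only, not the paper's:

```python
from itertools import product

def best_allocation(motions, rates, budget, criterion="average"):
    """Exhaustively pick one discrete source rate per node, subject to a
    total-rate budget, minimizing either the average or the maximum of a
    toy distortion model (distortion = motion level / source rate)."""
    best, best_cost = None, float("inf")
    for combo in product(rates, repeat=len(motions)):
        if sum(combo) > budget:
            continue
        d = [m / r for r, m in zip(combo, motions)]
        cost = sum(d) / len(d) if criterion == "average" else max(d)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost
```

Even this crude model reproduces the qualitative behavior described above: the high-motion node is granted the larger share of the rate budget.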
Structured pedigree information for distributed fusion systems
NASA Astrophysics Data System (ADS)
Arambel, Pablo O.
2008-04-01
One of the most critical challenges in distributed data fusion is the avoidance of information double counting (also called "data incest" or "rumor propagation"). This occurs when a node in a network incorporates information into an estimate - e.g. the position of an object - and the estimate is injected into the network. Other nodes fuse this estimate with their own estimates, and continue to propagate estimates through the network. When the first node receives a fused estimate from the network, it does not know whether it already contains its own contributions. Since the correlation between its own estimate and the estimate received from the network is not known, the node cannot fuse the estimates in an optimal way. If it assumes that both estimates are independent of each other, it unknowingly double counts the information that has already been used to obtain the two estimates. This leads to overoptimistic error covariance matrices. If the double counting is not kept under control, it may lead to serious performance degradation. Double counting can be avoided by propagating uniquely tagged raw measurements; however, that forces each node to process all the measurements and precludes the propagation of derived information. Another approach is to fuse the information using the Covariance Intersection (CI) equations, which maintain consistent estimates irrespective of the cross-correlation among estimates. However, CI does not exploit pedigree information of any kind. In this paper we present an approach that propagates multiple covariance matrices, one for each uncorrelated source in the network. This is a way to compress the pedigree information and avoids the need to propagate raw measurements. The approach uses a generalized version of the Split CI to fuse different estimates with appropriate weights to guarantee the consistency of the estimates.
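The scalar form of the Covariance Intersection equations mentioned above is compact enough to sketch; `omega` is the free weighting parameter that keeps the fused estimate consistent for any unknown cross-correlation between the sources:

```python
def covariance_intersection(x1, p1, x2, p2, omega=0.5):
    """Scalar Covariance Intersection: fuse estimates (x1, p1) and (x2, p2)
    (mean, variance) without knowing their cross-correlation.
    0 <= omega <= 1 weights the information contributions."""
    inv_p = omega / p1 + (1.0 - omega) / p2
    p = 1.0 / inv_p
    x = p * (omega * x1 / p1 + (1.0 - omega) * x2 / p2)
    return x, p
```

Note that with two equally uncertain inputs the fused variance does not shrink below either input's variance: CI deliberately forgoes the optimism of the independent-fusion formula, which is what keeps it consistent under possible double counting.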
Joint Optimal Placement and Energy Allocation of Underwater Sensors in a Tree Topology
2014-03-10
We assumed a model of the energy consumption of the nodes, and we achieve the optimal placement of the underwater acoustic sensor nodes with respect to the capacity of the wireless links between the nodes.
Efficient implementation of MrBayes on multi-GPU.
Bao, Jie; Xia, Hongju; Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang
2013-06-01
MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)(3)), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)(3) Bayesian algorithm and its improved and parallel versions are still not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)(3) (aMCMCMC), of MrBayes (MC)(3) on the compute unified device architecture. By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)(3) achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)(3) is dramatically faster than all the previous (MC)(3) algorithms and scales well to large GPU clusters.
A universal hybrid decision tree classifier design for human activity classification.
Chien, Chieh; Pottie, Gregory J
2012-01-01
A system that reliably classifies daily life activities can contribute to more effective and economical treatments for patients with chronic conditions or undergoing rehabilitative therapy. We propose a universal hybrid decision tree classifier for this purpose. The tree classifier can flexibly implement different decision rules at its internal nodes, and can be adapted from a population-based model when supplemented by training data for individuals. The system was tested using seven subjects each monitored by 14 triaxial accelerometers. Each subject performed fourteen different activities typical of daily life. Using leave-one-out cross validation, our decision tree produced average classification accuracies of 89.9%. In contrast, the MATLAB personalized tree classifiers using Gini's diversity index as the split criterion followed by optimally tuning the thresholds for each subject yielded 69.2%.
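A minimal sketch of the Gini-based split selection used by the baseline tree classifiers (one feature, one binary split; not the authors' full hybrid tree):

```python
def gini(labels):
    """Gini diversity index of a label multiset: 1 - sum of squared class
    frequencies. Zero means the node is pure."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_threshold(xs, ys):
    """Pick the threshold on a single feature that minimizes the weighted
    Gini impurity of the two child nodes (x <= t goes left)."""
    best_t, best_imp = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if imp < best_imp:
            best_t, best_imp = t, imp
    return best_t, best_imp
```

A hybrid tree as described above would let each internal node choose a different decision rule; per-subject tuning then amounts to re-fitting thresholds like `best_t` on individual training data.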
Scialpi, Michele; Schiavone, Raffaele; D'Andrea, Alfredo; Palumbo, Isabella; Magli, Michelle; Gravante, Sabrina; Falcone, Giuseppe; De Filippi, Claudio; Manganaro, Lucia; Palumbo, Barbara
2015-05-01
To evaluate the image quality and the diagnostic efficacy of single-phase whole-body 64-slice multidetector CT (MDCT) in pediatric oncology. Chest-abdomen-pelvis CT examinations with the single-phase split-bolus technique were evaluated for T: detection and delineation of the primary tumor (assessment of the extent of the lesion into neighboring tissues), N: regional lymph nodes and M: distant metastasis. Quality scores (5-point scale) were assessed by two radiologists on parenchymal and vascular enhancement. Accurate TNM staging in terms of detection and delineation of the primary tumor, regional lymph nodes and distant metastasis was obtained in all cases. For image quality and artifact severity, the Kappa value for interobserver agreement was 0.754 (p<0.001), indicating very good agreement between observers. The single-pass total-body CT split-bolus technique reached the highest overall image quality and an accurate TNM staging in pediatric patients with cancer. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big data processing. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of the WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
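The split-then-aggregate flow can be mimicked in a few lines of plain Python, with `functools.reduce` standing in for the Hadoop reduce stage and a trivial per-image "feature" standing in for the real extractor:

```python
from functools import reduce

def split_dataset(images, num_splits):
    """Divide the image list into roughly equal splits, one per slave node."""
    return [images[i::num_splits] for i in range(num_splits)]

def map_extract(split):
    """Stand-in feature extractor run on one split: here just the name
    length of each 'image' (a placeholder, not a real WAMI feature)."""
    return [{"image": img, "feature": len(img)} for img in split]

def reduce_features(a, b):
    """Aggregate per-split feature lists (the role HDFS plays above)."""
    return a + b

images = ["img_a", "img_bb", "img_ccc", "img_dddd"]
features = reduce(reduce_features,
                  (map_extract(s) for s in split_dataset(images, 2)))
```

In the real system each `map_extract` call runs on a different slave node; the sketch only shows why the work parallelizes cleanly: splits are processed independently and the reduce step is a plain concatenation.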
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving the virtual network mapping problem. This method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the node mapping phase and the link mapping phase, respectively. The node mapping algorithm adopts a greedy strategy, mainly considering two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm, based on the result of the node mapping phase, adopts a distributed constraint optimization method, which guarantees an optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and results show that it performs very well.
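A minimal sketch of a greedy node-mapping phase, modeling only the available-resource factor (the distance factor and the link-mapping phase are omitted here):

```python
def greedy_node_mapping(virtual_demands, substrate):
    """Greedy node mapping: assign each virtual node (largest demand first)
    to the substrate node with the most remaining capacity that fits it.

    virtual_demands: {virtual node -> resource demand}
    substrate:       {substrate node -> available resource}
    Returns {virtual node -> substrate node}, or None if mapping fails."""
    capacity = dict(substrate)
    mapping = {}
    for vnode, demand in sorted(virtual_demands.items(),
                                key=lambda kv: -kv[1]):
        candidates = [n for n, c in capacity.items() if c >= demand]
        if not candidates:
            return None
        target = max(candidates, key=lambda n: capacity[n])
        mapping[vnode] = target
        capacity[target] -= demand
    return mapping
```

Placing the largest demands first onto the best-provisioned substrate nodes is the usual greedy heuristic; the link-mapping phase would then route virtual links over substrate paths between the chosen nodes.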
Guimarães, Dayan Adionel; Sakai, Lucas Jun; Alberti, Antonio Marcos; de Souza, Rausley Adriano Amaral
2016-01-01
In this paper, a simple and flexible method for increasing the lifetime of fixed or mobile wireless sensor networks is proposed. Based on past residual energy information reported by the sensor nodes, the sink node or another central node dynamically optimizes the communication activity levels of the sensor nodes to save energy without sacrificing the data throughput. The activity levels are defined to represent portions of time or time-frequency slots in a frame, during which the sensor nodes are scheduled to communicate with the sink node to report sensory measurements. Besides node mobility, it is considered that sensors’ batteries may be recharged via a wireless power transmission or equivalent energy harvesting scheme, bringing to the optimization problem an even more dynamic character. We report large increased lifetimes over the non-optimized network and comparable or even larger lifetime improvements with respect to an idealized greedy algorithm that uses both the real-time channel state and the residual energy information. PMID:27657075
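One simple interpretation of the activity-level idea — hedged as an assumption, since the paper's optimizer is more elaborate — is to let the sink allocate each node's share of a frame's slots in proportion to its reported residual energy:

```python
def allocate_activity(residual_energy, total_slots):
    """Give each sensor a share of the frame's communication slots
    proportional to its reported residual energy, so nearly depleted
    nodes are scheduled less often (an illustrative heuristic)."""
    total = sum(residual_energy.values())
    return {node: total_slots * e / total
            for node, e in residual_energy.items()}
```

Re-running this allocation every frame naturally tracks both mobility and wireless recharging, since each frame uses the latest residual-energy reports.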
Computer-Assisted Traffic Engineering Using Assignment, Optimal Signal Setting, and Modal Split
DOT National Transportation Integrated Search
1978-05-01
Methods of traffic assignment, traffic signal setting, and modal split analysis are combined in a set of computer-assisted traffic engineering programs. The system optimization and user optimization traffic assignments are described. Travel time func...
Abuassba, Adnan O M; Zhang, Dezheng; Luo, Xiong; Shaheryar, Ahmad; Ali, Hazrat
2017-01-01
Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden-layer feedforward neural network (SLFN). It often has good generalization performance. However, there are chances that it might overfit the training data due to having more hidden nodes than needed. To address the generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function of increasing diversity and accuracy among the final ensemble. Finally, the class label of unseen data is predicted using a majority vote approach. Splitting the training data into subsets and incorporation of heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
The optimal community detection of software based on complex networks
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong
2016-02-01
The community structure is important for software in terms of understanding the design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method, Optimal Partition Software Network (OPSN), is proposed based on the dependency relationships among the software functions. First, by analyzing the information of multiple execution traces of one software system, we construct the Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of a function node and sort the nodes by the measured results. Third, we select the top K (K=1,2,…) nodes as the cores of the primal communities (each community has exactly one core node). By comparing the dependency relationships between each node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity with different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
Intelligent control and cooperation for mobile robots
NASA Astrophysics Data System (ADS)
Stingu, Petru Emanuel
This work addresses the current research being conducted at the Automation & Robotics Research Institute in the areas of UAV quadrotor control and heterogeneous multi-vehicle cooperation. Autonomy can be successfully achieved by a robot under the following conditions: the robot has to be able to acquire knowledge about the environment and itself, and it also has to be able to reason under uncertainty. The control system must react quickly to immediate challenges, but also has to slowly adapt and improve based on accumulated knowledge. The major contribution of this work is the transfer of ADP algorithms from the purely theoretical environment to complex real-world robotic platforms that work in real time and in uncontrolled environments. Many solutions are adopted from those present in nature because they have been proven to be close to optimal in very different settings. For the control of a single platform, reinforcement learning algorithms are used to design suboptimal controllers for a class of complex systems that can be conceptually split into local loops with simpler dynamics and relatively weak coupling to the rest of the system. Optimality is enforced by having a global critic, but the curse of dimensionality is avoided by using local actors and intelligent pre-processing of the information used for learning the optimal controllers. The system model is used for constructing the structure of the control system, but on top of that the adaptive neural networks that form the actors use the knowledge acquired during normal operation to get closer to optimal control. In real-world experiments, efficient learning is a strong requirement for success. This is accomplished by using an approximation of the system model to focus the learning on equivalent configurations of the state space. Due to the availability of only local data for training, neural networks with local activation functions are implemented.
For the control of a formation of robots subject to dynamic communication constraints, game theory is used in addition to reinforcement learning. The nodes maintain an extra set of state variables about all the other nodes that they can communicate with. The most important are trust and predictability. They are a way to incorporate knowledge acquired in the past into the control decisions taken by each node. The trust variable provides a simple mechanism for the implementation of reinforcement learning. For robot formations, potential-field-based control algorithms are used to generate the control commands. The formation structure changes due to the environment and due to the decisions of the nodes. The problem is one of building a graph and coalitions through distributed decisions while still reaching optimal behavior globally.
Process techniques of charge transfer time reduction for high speed CMOS image sensors
NASA Astrophysics Data System (ADS)
Zhongxiang, Cao; Quanliang, Li; Ye, Han; Qi, Qin; Peng, Feng; Liyuan, Liu; Nanjian, Wu
2014-11-01
This paper proposes pixel process techniques to reduce the charge transfer time in high-speed CMOS image sensors. These techniques increase the lateral conductivity of the photo-generated carriers in a pinned photodiode (PPD) and the voltage difference between the PPD and the floating diffusion (FD) node by controlling and optimizing the N doping concentration in the PPD and the threshold voltage of the reset transistor, respectively. The techniques effectively shorten the charge transfer time from the PPD to the FD node. The proposed process techniques do not need extra masks and do not harm the fill factor. A sub-array of 32 × 64 pixels was designed and implemented in the 0.18 μm CIS process with five implantation conditions splitting the N region in the PPD. The simulation and measured results demonstrate that the charge transfer time can be decreased by using the proposed techniques. Comparing the charge transfer time of the pixel across the different implantation conditions of the N region, a charge transfer time of 0.32 μs is achieved and image lag is reduced by 31% using the proposed process techniques.
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
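The operator-splitting step that makes the per-cell work parallelizable can be sketched in one dimension; the explicit updates, generic reaction term, and no-flux boundaries below are a simplification of the actual SAN/atrium model:

```python
def lie_splitting_step(u, dt, dx, diffusion, reaction):
    """One Lie (sequential) operator-splitting step for
    du/dt = reaction(u) + diffusion * d2u/dx2 on a 1-D fibre.

    Splitting decouples the local (per-cell) reaction from the spatial
    coupling; the reaction sub-step is embarrassingly parallel, which is
    what a GPU exploits."""
    # Sub-step 1: explicit reaction update, cell by cell.
    u = [ui + dt * reaction(ui) for ui in u]
    # Sub-step 2: explicit diffusion update with no-flux boundaries.
    n = len(u)
    out = list(u)
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        out[i] = u[i] + dt * diffusion * (left - 2 * u[i] + right) / dx ** 2
    return out
```

On a GPU, each list comprehension or loop iteration becomes one thread; the strategies compared in the paper differ in how these sub-steps are mapped to blocks and how data transfer is scheduled.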
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
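As a rough illustration of what a credal split criterion looks like, the sketch below computes the upper (maximum) entropy over the credal set induced by the imprecise Dirichlet model, by levelling the extra mass s onto the smallest class counts, and plugs it into an information-gain-style score. This is a hedged toy, not the IRNV implementation; the greedy levelling follows the standard IDM maximum-entropy procedure, and the function names are ours.

```python
import math

def max_entropy_idm(counts, s=1.0):
    """Upper entropy over the imprecise Dirichlet model's credal set:
    the extra mass s is poured onto the smallest counts, levelling
    them, which makes the distribution as uniform as the credal set
    allows (greedy sketch of the usual IDM procedure)."""
    counts = [float(c) for c in counts]
    remaining = s
    while remaining > 1e-12:
        m = min(counts)
        idx = [i for i, c in enumerate(counts) if c == m]
        higher = [c for c in counts if c > m]
        # Mass needed to raise all current minima up to the next level.
        room = (min(higher) - m) * len(idx) if higher else float("inf")
        add = min(remaining, room)
        for i in idx:
            counts[i] += add / len(idx)
        remaining -= add
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def credal_info_gain(parent_counts, children_counts, s=1.0):
    """Split score: information gain with Shannon entropy replaced by
    the IDM upper entropy (a sketch of a credal split criterion)."""
    n = sum(parent_counts)
    gain = max_entropy_idm(parent_counts, s)
    for child in children_counts:
        gain -= (sum(child) / n) * max_entropy_idm(child, s)
    return gain
```

A split is chosen, as with classic criteria, by maximizing this score over candidate splits; the difference is that small samples are penalized because their credal sets are wide.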
Multicasting based optical inverse multiplexing in elastic optical network.
Guo, Bingli; Xu, Yingying; Zhu, Paikun; Zhong, Yucheng; Chen, Yuanxiang; Li, Juhao; Chen, Zhangyuan; He, Yongqi
2014-06-16
Optical multicasting based inverse multiplexing (IM) is introduced into the spectrum allocation of elastic optical networks to resolve the spectrum fragmentation problem, where superchannels can be split and fitted into several discrete spectrum blocks in the intermediate node. We experimentally demonstrate it with a 1-to-7 optical superchannel multicasting module and selecting/coupling components. Also, simulation results show that, compared with several emerging spectrum defragmentation solutions (e.g., spectrum conversion, split spectrum), IM can significantly reduce blocking without adding as much system complexity as split spectrum. On the other hand, service fairness for traffic with different granularities under these schemes is investigated for the first time, and it shows that IM performs better than spectrum conversion and almost as well as split spectrum, especially for smaller-size traffic under light traffic intensity.
Generalized field-splitting algorithms for optimal IMRT delivery efficiency.
Kamath, Srijit; Sahni, Sartaj; Li, Jonathan; Ranka, Sanjay; Palta, Jatinder
2007-09-21
Intensity-modulated radiation therapy (IMRT) uses radiation beams of varying intensities to deliver varying doses of radiation to different areas of the tissue. The use of IMRT has allowed the delivery of higher doses of radiation to the tumor and lower doses to the surrounding healthy tissue. It is not uncommon for head and neck tumors, for example, to have large treatment widths that are not deliverable using a single field. In such cases, the intensity matrix generated by the optimizer needs to be split into two or three matrices, each of which may be delivered using a single field. Existing field-splitting algorithms use a pre-specified split line or region, and the intensity matrix is split along a column, i.e., all rows of the matrix are split at the same column (with or without overlapping of the split fields, i.e., feathering). If three fields result, then the two splits are along the same two columns for all rows. In this paper we study the problem of splitting a large field into two or three subfields with the field width as the only constraint, allowing for an arbitrary overlap of the split fields, so that the total MU efficiency of delivering the split fields is maximized. Proof of optimality is provided for the proposed algorithm. An average decrease of 18.8% in total MUs is found when compared to the split generated by a commercial treatment planning system, and of 10% when compared to the split generated by our previously published algorithm.
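A toy version of the objective can make the MU criterion concrete. Assuming, as is standard for sliding-window delivery, that the MU of a row profile is the sum of its positive increments and that a field's MU is the maximum over its rows, the brute-force sketch below searches per-row split columns under a width constraint. It illustrates the problem only; the paper's algorithm finds the optimum without enumeration.

```python
from itertools import product

def mu(profile):
    """Monitor units to deliver one row profile with a sliding window:
    the sum of positive increments (leading zero implied)."""
    prev, total = 0, 0
    for v in profile:
        if v > prev:
            total += v - prev
        prev = v
    return total

def best_split(matrix, max_width):
    """Brute-force 2-field split of an intensity matrix wider than the
    deliverable max_width.  Each row may split at its own column
    (arbitrary overlap); total MU of the two fields is the max-row MU
    of the left plus the max-row MU of the right.  Assumes the width
    is at most 2 * max_width so a 2-field split is feasible."""
    n = len(matrix[0])
    lo, hi = n - max_width, max_width     # feasible split columns
    best = None
    for cuts in product(range(lo, hi + 1), repeat=len(matrix)):
        left = max(mu(row[:k]) for row, k in zip(matrix, cuts))
        right = max(mu(row[k:]) for row, k in zip(matrix, cuts))
        if best is None or left + right < best[0]:
            best = (left + right, cuts)
    return best
```

Note that splitting generally costs MU: a profile such as [3, 1, 1, 3] takes 5 MU whole but 6 MU in any two-piece split, which is why the split placement is worth optimizing.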
Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmons, James S.; Leverman, Dustin B.; Hanley, Jesse A.
This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort has been split into two parts: the first part (DNE P1) provides support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addresses split directories over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, internal OLCF testbeds were used. Results are promising, and OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.
Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing
2017-07-19
Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures. It employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain each node's optimal route to the base station. Simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
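The maximum-minimum route criterion can be sketched directly: among the feasible routes to the base station, pick the one whose weakest relay has the most residual energy. Below is a toy depth-first enumeration, practical only for small graphs and not the DHCO implementation; the graph, the energy map, and the node names are illustrative.

```python
def max_min_route(graph, energy, source, sink):
    """From all simple routes source -> sink, return (bottleneck, path)
    for the route whose minimum relay energy is largest (max-min
    criterion): selecting from a whole feasible routing set instead of
    greedily picking a next hop."""
    best = (float("-inf"), None)

    def dfs(node, path):
        nonlocal best
        if node == sink:
            relays = path[1:-1]   # endpoints excluded; sink is the base station
            bottleneck = min((energy[r] for r in relays), default=float("inf"))
            if bottleneck > best[0]:
                best = (bottleneck, list(path))
            return
        for nxt in graph.get(node, []):
            if nxt not in path:   # keep the route simple (no cycles)
                dfs(nxt, path + [nxt])

    dfs(source, [source])
    return best
```

Routing every packet through the energy-richest bottleneck spreads the traffic load, which is the longevity argument the abstract makes.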
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.
2013-04-01
To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven H and N and female pelvic IMRT cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases, with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reductions in the numbers of MU and segments were found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method were improved. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for radiotherapy treatment with large-field IMRT.
NASA Astrophysics Data System (ADS)
Huré, J.-M.; Hersant, F.
2017-02-01
We compute the structure of a self-gravitating torus with polytropic equation of state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at second order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index n ≥ 1), the underlying second order is naturally recovered for boundary values and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, virial parameter, etc.), i.e. errors vary with the number N of nodes per direction as ~1/N². This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS (n < 1), due to infinite mass density gradients at the fluid surface. We fix the problem by using kernel splitting. Finally, we propose an accelerated version of the self-consistent field (SCF) algorithm based on a node-by-node pre-conditioning of the mass density at each step. The computing time is reduced by a factor of 2 typically, regardless of the polytropic index. There is a priori no obstacle to applying these results and techniques to ellipsoidal configurations and even to 3D configurations.
Directed Diffusion Modelling for Tesso Nilo National Parks Case Study
NASA Astrophysics Data System (ADS)
Yasri, Indra; Safrianti, Ery
2018-01-01
Directed Diffusion (DD) has the ability to achieve energy efficiency in Wireless Sensor Networks (WSN). This paper proposes Directed Diffusion (DD) modelling for the Tesso Nilo National Park (TNNP) case study. Four stages of scenarios are involved in this modelling. It starts by appointing a sampling area through GPS coordinates. The sampling area is determined by an optimization process from 500 m x 500 m up to 1000 m x 1000 m with 100 m increments in between. The next stage is sensor node placement: sensor nodes are distributed in the sampling area with three different quantities, i.e., 20 nodes, 30 nodes, and 40 nodes, one of which is chosen as the optimized sensor node placement. The third stage is to implement all scenarios from stages 1 and 2 in the DD modelling. The last stage is an evaluation process to find the most energy-efficient combination of optimized sampling area and optimized sensor node placement on the Directed Diffusion (DD) routing protocol. The result shows that the combination of a 500 m x 500 m sampling area and 20 nodes achieves the energy efficiency needed to support a forest fire prevention system at Tesso Nilo National Park.
A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.
Li, Yuhong; Gong, Guanghong; Li, Ni
2018-01-01
In this paper, we propose a novel algorithm, the parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. A set of canonical networks and a list of real-world networks were used in experiments. Comparison results demonstrated that the algorithm is better suited to optimizing the controllability of networks, especially larger networks. We subsequently demonstrated that there are links between the optimal control nodes and some network statistical characteristics. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks, or even extra-large networks with hundreds of thousands of nodes.
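The feasibility check at the core of such a formulation can be sketched numerically. For verifying full controllability the PBH condition is equivalent to the Kalman rank test used below; the chain network and driver-node choice are illustrative, and this is only the check a candidate control-node set must pass, not the genetic search itself.

```python
import numpy as np

def is_controllable(A, control_nodes):
    """Kalman rank test (equivalent to the PBH condition for checking
    full controllability): x' = Ax + Bu is controllable iff
    [B, AB, ..., A^(n-1)B] has rank n, where B selects the control
    (driver) nodes."""
    n = A.shape[0]
    B = np.zeros((n, len(control_nodes)))
    for j, node in enumerate(control_nodes):
        B[node, j] = 1.0
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])          # next Krylov block
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Directed chain 0 -> 1 -> 2 (A[i, j] = 1 means j feeds i):
# driving the head node controls the whole chain, driving the tail does not.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
```

A search procedure such as the paper's genetic algorithm would call a test like this (or its structural equivalent) as the fitness oracle while shrinking the control-node set.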
Observation of Landau quantization and standing waves in HfSiS
NASA Astrophysics Data System (ADS)
Jiao, L.; Xu, Q. N.; Qi, Y. P.; Wu, S.-C.; Sun, Y.; Felser, C.; Wirth, S.
2018-05-01
Recently, HfSiS was found to be a new type of Dirac semimetal with a line of Dirac nodes in the band structure. Meanwhile, Rashba-split surface states are also pronounced in this compound. Here we report a systematic study of HfSiS by scanning tunneling microscopy/spectroscopy at low temperature and high magnetic field. The Rashba-split surface states are characterized by measuring Landau quantization and standing waves, which reveal a quasilinear dispersive band structure. First-principles calculations based on density-functional theory are conducted and compared with the experimental results. Based on these investigations, the properties of the Rashba-split surface states and their interplay with defects and collective modes are discussed.
High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems
NASA Technical Reports Server (NTRS)
Kolano, Paul Z.; Ciotti, Robert B.
2012-01-01
Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as the number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. Mcp and msum provide significant performance improvements over standard cp and md5sum using multiple types of parallelism and other optimizations. The total speed-ups from all improvements are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open source software at http://mutil.sourceforge.net.
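The hash-tree idea, that an inherently serial checksum becomes parallel once the file is chunked, can be sketched as follows. A thread pool stands in for msum's multi-node workers; the chunk size and the use of MD5 over concatenated leaf digests are illustrative choices, not mutil's actual format.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_checksum(data, chunk_size=4, workers=4):
    """Hash-tree checksum: hash fixed-size chunks independently (the
    parallelizable leaves), then hash the concatenated leaf digests
    into a single root digest."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        leaves = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    return hashlib.md5(b"".join(leaves)).hexdigest()
```

Because each leaf depends only on its own chunk, any number of threads or nodes can compute leaves concurrently, and the serial work left at the root is proportional to the number of chunks, not the file size.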
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
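The check-splitting construction described above can be sketched on a protograph base matrix: one check row is divided into two rows, and a new degree-2 variable column ties the halves together, lowering the rate while preserving every original variable-node degree. A minimal sketch only; which edges go to which half is left to the caller, and the function name is ours.

```python
import numpy as np

def split_check_node(base, row, left_cols):
    """Split one check node of a protograph base matrix into two
    checks joined by a new degree-2 variable node: the row's edges in
    left_cols go to one new check and the rest to the other, and a new
    column with a single 1 in each new row connects them."""
    m, n = base.shape
    top = np.zeros(n, dtype=int)
    bottom = base[row].copy()
    for c in left_cols:
        top[c] = base[row, c]
        bottom[c] = 0
    out = np.delete(base, row, axis=0)
    out = np.vstack([out, top, bottom])
    # New degree-2 variable node connecting the two half-checks.
    link = np.zeros((out.shape[0], 1), dtype=int)
    link[-2, 0] = link[-1, 0] = 1
    return np.hstack([out, link])
```

Each split adds one check and one degree-2 variable, so repeated splitting walks the rate down exactly as the construction in the abstract requires.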
Opinion formation on adaptive networks with intensive average degree
NASA Astrophysics Data System (ADS)
Schmittmann, B.; Mukhopadhyay, Abhishek
2010-12-01
We study the evolution of binary opinions on a simple adaptive network of N nodes. At each time step, a randomly selected node updates its state (“opinion”) according to the majority opinion of the nodes that it is linked to; subsequently, all links are reassigned with probability p̃ (q̃) if they connect nodes with equal (opposite) opinions. In contrast to earlier work, we ensure that the average connectivity (“degree”) of each node is independent of the system size (“intensive”) by choosing p̃ and q̃ to be of O(1/N). Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. We find two absorbing states, characterized by perfect consensus, and one metastable state, characterized by a population split evenly between the two opinions. The relaxation time of this state grows exponentially with the number of nodes, N. A second metastable state, found in the earlier studies, is no longer observed.
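A toy version of the model is easy to simulate. The sketch below follows the update rule in the abstract (majority adoption, then per-link rewiring with agree/disagree probabilities p and q chosen O(1/N)); the parameter values are illustrative and none of the paper's analysis is reproduced.

```python
import random

def simulate(n=50, steps=2000, p=0.02, q=0.02, k=4, seed=1):
    """Adaptive-network opinion toy: each step a random node adopts
    the majority opinion of its neighbours, then every link is
    rewired to a random pair with probability p if its endpoints
    agree and q if they disagree (p, q of order 1/N keeps the mean
    degree intensive).  Duplicate links are tolerated in this toy."""
    rng = random.Random(seed)
    opinion = [rng.choice((-1, 1)) for _ in range(n)]
    links = set()
    while len(links) < n * k // 2:          # ~k links per node on average
        a, b = rng.sample(range(n), 2)
        links.add((min(a, b), max(a, b)))
    links = list(links)
    for _ in range(steps):
        i = rng.randrange(n)
        nbrs = [b if a == i else a for a, b in links if i in (a, b)]
        s = sum(opinion[j] for j in nbrs)
        if s:                               # adopt the local majority
            opinion[i] = 1 if s > 0 else -1
        for idx, (a, b) in enumerate(links):
            prob = p if opinion[a] == opinion[b] else q
            if rng.random() < prob:         # reassign to a random pair
                x, y = rng.sample(range(n), 2)
                links[idx] = (min(x, y), max(x, y))
    return opinion
```

Tracking the magnetization sum(opinion) over many runs is how one would observe the consensus and evenly-split states the abstract describes.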
Optimal Deployment of Sensor Nodes Based on Performance Surface of Underwater Acoustic Communication
Choi, Jee Woong
2017-01-01
The underwater acoustic sensor network (UWASN) is a system that exchanges data between numerous sensor nodes deployed in the sea. The UWASN uses an underwater acoustic communication technique to exchange data. Therefore, it is important to design a robust system that will function even in severely fluctuating underwater communication conditions, along with variations in the ocean environment. In this paper, a new algorithm to find the optimal deployment positions of underwater sensor nodes is proposed. The algorithm uses the communication performance surface, which is a map showing the underwater acoustic communication performance of a targeted area. A virtual force-particle swarm optimization algorithm is then used as an optimization technique to find the optimal deployment positions of the sensor nodes, using the performance surface information to estimate the communication radii of the sensor nodes in each generation. The algorithm is evaluated by comparing simulation results between two different seasons (summer and winter) for an area located off the eastern coast of Korea as the selected targeted area. PMID:29053569
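The virtual-force ingredient of the deployment algorithm can be sketched independently of the PSO wrapper: node pairs attract when farther apart than their combined communication radii and repel when closer, so iterating spreads the deployment toward mutual coverage. In the paper the radii come from the performance surface; here they are simply given, and the linear force law is an illustrative choice.

```python
import numpy as np

def virtual_force_step(pos, comm_radius, step=0.1):
    """One virtual-force iteration over node positions (rows of pos).
    Each pair exerts a force along the line joining it, proportional
    to (distance - sum of their communication radii): positive
    attracts, negative repels."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            target = comm_radius[i] + comm_radius[j]
            force[i] += (dist - target) * d / dist
    return pos + step * force / max(n - 1, 1)
```

Two nodes under this rule settle at a separation equal to the sum of their radii, which is the coverage-without-gaps spacing the deployment aims for.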
On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier
2013-01-01
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582
Performing a scatterv operation on a hierarchical tree network optimized for collective operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D
Performing a scatterv operation on a hierarchical tree network optimized for collective operations, including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child.
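The recursive pattern in this description, keep your own portion and forward each child the sub-chunk for its whole subtree, can be sketched in plain Python. This is a sequential stand-in for the message-passing version; the tree and payloads are illustrative.

```python
def tree_scatterv(node, tree, chunk):
    """Scatterv down a hierarchical tree: `chunk` holds this node's
    portion plus the portions for every node below it.  The node keeps
    its own portion and forwards to each nearest-neighbor child the
    sub-chunk for that child's entire subtree."""
    delivered = {node: chunk[node]}        # the node keeps its portion
    for child in tree.get(node, []):
        stack, members = [child], []
        while stack:                       # enumerate the child's subtree
            cur = stack.pop()
            members.append(cur)
            stack.extend(tree.get(cur, []))
        sub = {k: chunk[k] for k in members}
        delivered.update(tree_scatterv(child, tree, sub))
    return delivered
```

Each node therefore sends each child exactly one chunk, which is what makes the tree layout efficient for this collective.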
Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.
2015-01-01
In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with the largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss the rationale in general settings. PMID:26903687
Diffusion Characteristics of Upwind Schemes on Unstructured Triangulations
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1998-01-01
The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and dimensionally-split finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the dimensionally-split finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2-3 seen for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a 20-25 speedup over finite volume.
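Although the paper works on unstructured triangulations, the essence of an upwind step and its artificial diffusion shows up already in 1-D. The sketch below is that simpler dimensionally-split analogue, not the paper's fluctuation-splitting scheme; the grid, wave speed, and viscosity values are illustrative.

```python
import numpy as np

def upwind_step(u, a, nu, dt, dx):
    """First-order upwind finite-volume step for u_t + a u_x = nu u_xx
    on a periodic grid.  Taking the flux from the upstream side
    stabilizes advection at the price of artificial diffusion of order
    a*dx/2, the quantity the paper compares scheme by scheme."""
    um = np.roll(u, 1)    # left neighbour (periodic)
    up = np.roll(u, -1)   # right neighbour
    adv = a * (u - um) / dx if a > 0 else a * (up - u) / dx
    diff = nu * (up - 2.0 * u + um) / dx**2
    return u - dt * adv + dt * diff
```

Two sanity properties of the scheme, conservation and (for pure advection under the CFL limit) boundedness of the solution, follow from the update being a convex combination of neighbouring values.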
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility of the algorithm are promising.
TreeCmp: Comparison of Trees in Polynomial Time
Bogdanowicz, Damian; Giaro, Krzysztof; Wróbel, Borys
2012-01-01
When a phylogenetic reconstruction does not result in one tree but in several, tree metrics permit finding out how far the reconstructed trees are from one another. They also make it possible to assess the accuracy of a reconstruction if a true tree is known. TreeCmp implements eight metrics that can be calculated in polynomial time for arbitrary (not only bifurcating) trees: four for unrooted trees (Matching Split metric, which we have recently proposed, Robinson-Foulds, Path Difference, Quartet) and four for rooted trees (Matching Cluster, Robinson-Foulds cluster, Nodal Splitted and Triple). TreeCmp is the first implementation of the Matching Split/Cluster metrics and the first efficient and convenient implementation of Nodal Splitted. It allows relatively large trees to be compared. We provide an example of the application of TreeCmp to compare the accuracy of ten approaches to phylogenetic reconstruction with trees of up to 5000 external nodes, using a measure of accuracy based on normalized similarity between trees.
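One of the listed metrics is simple enough to sketch: the Robinson-Foulds cluster distance for rooted trees is half the size of the symmetric difference of the two trees' cluster sets (one leaf cluster per internal node). The nested-tuple tree encoding below is an illustrative assumption, not TreeCmp's input format.

```python
def clusters(tree):
    """Leaf clusters (one per internal node) of a rooted tree given as
    nested tuples, with leaves as strings."""
    out = set()

    def walk(node):
        if isinstance(node, str):
            return frozenset([node])
        leaves = frozenset().union(*(walk(c) for c in node))
        out.add(leaves)
        return leaves

    walk(tree)
    return out

def rf_cluster_distance(t1, t2):
    """Robinson-Foulds cluster distance for rooted trees: half the
    symmetric difference of the two trees' cluster sets."""
    c1, c2 = clusters(t1), clusters(t2)
    return len(c1 ^ c2) / 2
```

Because cluster sets are computed in one traversal and compared with set operations, the whole metric is polynomial in the number of leaves, which is the property the abstract emphasizes.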
In Vitro Plant Regeneration from Commercial Cultivars of Soybean
Raza, Ghulam; Singh, Mohan B.; Bhalla, Prem L.
2017-01-01
Soybean, a major legume crop, is the source of vegetable oil and protein. There is a need for transgenic approaches to breeding superior soybean varieties to meet future climate challenges. Efficient plant regeneration is a prerequisite for successful application of genetic transformation technology. Soybean cultivars are classified into different maturity groups based on photoperiod requirements. In this study, nine soybean varieties belonging to different maturity group were regenerated successfully from three different explants: half split hypocotyl, complete hypocotyl, and cotyledonary node. All the genotypes and explant types responded by producing adventitious shoots. Shoot induction potential ranged within 60–87%, 50–100%, and 75–100%, and regeneration rate ranged within 4.2–10, 2.7–4.2, and 2.6–10.5 shoots per explant using half split hypocotyl, complete hypocotyl, and cotyledonary explants, respectively, among all the tested genotypes. Bunya variety showed the best regeneration response using half split and complete hypocotyl explants and the PNR791 with cotyledonary node. The regenerated shoots were successfully rooted and acclimatized to glasshouse conditions. This study shows that commercial varieties of soybean are amenable to shoot regeneration with high regeneration frequencies and could be exploited for genetic transformation. Further, our results show no correlation between shoots regeneration capacity with the maturity grouping of the soybean cultivars tested. PMID:28691031
A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
1996-01-01
Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.
Bunch Splitting Simulations for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satogata, Todd J.; Gamage, Randika
2016-05-01
We describe bunch splitting strategies for the proposed JLEIC ion collider ring at Jefferson Lab. This complex requires an unprecedented 9:6832 bunch splitting, performed in several stages. We outline the problem and current results, optimized with ESME, including a general parameterization of 1:2 bunch splitting for JLEIC parameters.
Split and Splice Approach for Highly Selective Targeting of Human NSCLC Tumors
2014-10-01
development and implementation of the “split-and-splice” approach required optimization of many independent parameters, which were addressed in parallel...verify the feasibility of the “split and splice” approach for targeting human NSCLC tumor cell lines in culture and prepare the optimized toxins for...for cultured cells (months 2-8). 2B. To test the efficiency of cell targeting by the toxin variants reconstituted in vitro (months 3-6). 2C. To
An Ant Colony Optimization and Hybrid Metaheuristics Algorithm to Solve the Split Delivery Vehicle Routing Problem
Rajappa, Gautham
2015-01-01
NASA Astrophysics Data System (ADS)
Wu, Yuechen; Chrysler, Benjamin; Kostuk, Raymond K.
2018-01-01
The technique of designing, optimizing, and fabricating broadband volume transmission holograms using dichromate gelatin (DCG) is summarized for solar spectrum-splitting applications. The spectrum-splitting photovoltaic (PV) system uses a series of single-bandgap PV cells that have different spectral conversion efficiency properties to more fully utilize the solar spectrum. In such a system, one or more high-performance optical filters are usually required to split the solar spectrum and efficiently send them to the corresponding PV cells. An ideal spectral filter should have a rectangular shape with sharp transition wavelengths. A methodology of designing and modeling a transmission DCG hologram using coupled wave analysis for different PV bandgap combinations is described. To achieve a broad diffraction bandwidth and sharp cutoff wavelength, a cascaded structure of multiple thick holograms is described. A search algorithm is then developed to optimize both single- and two-layer cascaded holographic spectrum-splitting elements for the best bandgap combinations of two- and three-junction spectrum-splitting photovoltaic (SSPV) systems illuminated under the AM1.5 solar spectrum. The power conversion efficiencies of the optimized systems are found to be 42.56% and 48.41%, respectively, using the detailed balance method, and show an improvement compared with a tandem multijunction system. A fabrication method for cascaded DCG holographic filters is also described and used to prototype the optimized filter for the three-junction SSPV system.
A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy
Wen, Hui; Xie, Weixin; Pei, Jihong
2016-01-01
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a cascaded BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network for nonlinear classification. The optimized learning strategy is as follows: first, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network then performs adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional data sets and data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms other algorithms for training SLFNs. PMID:27792737
Method for nonlinear optimization for gas tagging and other systems
Chen, Ting; Gross, Kenny C.; Wegerich, Stephan
1998-01-01
A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node; selecting initial locations for the remaining n-1 nodes using target gas tag compositions; generating a set of random gene pools with L nodes; and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum-energy states that identify optimal gas tag nodes. Each energy is compared to a convergence threshold, and once a gas tag node is identified the procedure continues with the next node until all remaining n nodes have been established.
Submicron Systems Architecture
1983-11-01
hours, and is producing successively more and more refined statistics. It will run for several hundred more hours before improving significantly on the...splitting a node into two parts and connecting to. Similarly, a stuck-open transistor fault is modeled by putting a fault transistor in series
Proposal for optimal placement platform of bikes using queueing networks.
Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu
2016-01-01
In recent social experiments, rental motorbikes and rental bicycles have been arranged at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Some experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. However, from these experiments, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models are effectively designed to represent the distribution system. Therefore, we construct a model to arrange the nodes for distributing bikes using a queueing network. To adopt realistic values for our model, we use the Google Maps application program interface. Thus, we can easily obtain values of distance and transit time between nodes in various places in the world. Moreover, we apply the distribution of a population to a gravity model and we compute the effective transition probability for this queueing network. If the arrangement of the nodes and number of bikes at each node is known, we can precisely design the system. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can optimize simultaneously the number of nodes, node places, and number of bikes for each node, and we can construct a base for a rental cycle business to use our system.
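The gravity-model step described above can be sketched as follows. The quadratic distance decay and the function name are illustrative assumptions, not the paper's exact formulation; in practice the distances would come from the Google Maps API mentioned in the abstract.

```python
def gravity_transition_probs(populations, distances):
    """Transition probabilities between candidate bike nodes under a
    simple gravity model: attraction from node i to node j is taken
    proportional to pop_i * pop_j / d_ij**2 (the exponent 2 is an
    illustrative assumption), then normalized per row so each node's
    outgoing probabilities sum to 1."""
    n = len(populations)
    probs = []
    for i in range(n):
        weights = [
            0.0 if i == j else populations[i] * populations[j] / distances[i][j] ** 2
            for j in range(n)
        ]
        total = sum(weights)
        probs.append([w / total for w in weights])
    return probs
```

The resulting row-stochastic matrix is exactly what a queueing-network model needs as its routing input.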
Optimizing Dynamical Network Structure for Pinning Control
NASA Astrophysics Data System (ADS)
Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo
2016-04-01
Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
Analysis of power management and system latency in wireless sensor networks
NASA Astrophysics Data System (ADS)
Oswald, Matthew T.; Rohwer, Judd A.; Forman, Michael A.
2004-08-01
Successful power management in a wireless sensor network requires optimization of the protocols that affect energy consumption on each node and the aggregate effects across the larger network. System optimization for a given deployment scenario requires analyzing and trading off desired node and network features against their associated costs. The sleep protocol for an energy-efficient wireless sensor network for event detection, target classification, and target tracking developed at Sandia National Laboratories is presented. The dynamic source routing (DSR) algorithm is chosen to reduce network maintenance overhead while providing a self-configuring and self-healing network architecture. A method for determining the optimal sleep time is developed and presented, providing reference data spanning several orders of magnitude. Message timing diagrams show that a node in a five-node cluster employing an optimal cyclic single-radio sleep protocol consumes 3% more energy and incurs 16 s more latency than nodes employing the more complex dual-radio STEM protocol.
Archer, Charles J [Rochester, MN; Hardwick, Camesha R [Fayetteville, NC; McCarthy, Patrick J [Rochester, MN; Wallenfelt, Brian P [Eden Prairie, MN
2009-06-23
Methods, parallel computers, and products are provided for identifying messaging completion on a parallel computer. The parallel computer includes a plurality of compute nodes coupled for data communications by at least two independent data communications networks: a binary tree network, optimal for collective operations, that organizes the nodes as a tree, and a torus network, optimal for point-to-point operations, that organizes the nodes as a torus. Embodiments include reading all counters at each node of the torus network; calculating at each node a current node value in dependence upon the values read from the counters; and determining for all nodes whether the current node value for each node is the same as a previously calculated node value for that node. If the current node value is the same as the previously calculated node value for all nodes of the torus network, embodiments determine that messaging is complete; otherwise, embodiments determine that messaging is currently incomplete.
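A minimal sketch of one round of the completion test described above. The counter semantics and the sum-combination rule are assumptions; the actual per-node value calculation in the patent may differ.

```python
def check_messaging_complete(counters_by_node, prev_values):
    """One round of the completion test: derive a current value per node
    from that node's counters (here simply their sum, an assumed
    combination rule) and compare against the previous round's values.
    Messaging is declared complete only when no node's value changed."""
    current = {node: sum(vals) for node, vals in counters_by_node.items()}
    complete = all(
        current[node] == prev_values.get(node) for node in current
    )
    return complete, current
```

A caller would poll this repeatedly over the torus network; two successive identical snapshots mean no messages were injected or drained in between.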
Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma
2018-04-12
Radio frequency identification (RFID) is a wireless communication technology that enables data gathering and identification of any tagged object. Collisions produced during wireless communication lead to a variety of problems, including unwanted iterations, reader-induced idle slots, and computational complexity in estimating and recognizing the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the frame length is based on the quantity of tags to yield optimal efficiency. The optimal splitting method shortens idle slots using an optimal value for the splitting level Mopt (where M > 2), varying slot sizes to minimize the identification time for idle slots. The proposed algorithm avoids the cumbersome estimation of the number of tags, and the number of tags has no effect on its efficiency. Our experimental results show that, using the proposed algorithm, the efficiency curve remains constant as the number of tags varies from 50 to 450, resulting in an overall theoretical efficiency gain of 0.032 over a system efficiency of 0.441, thus outperforming both dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
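For context on the efficiency figures quoted above, the baseline frame-slotted ALOHA efficiency (not the proposed algorithm itself) can be computed directly. Matching the frame length to the tag count is what dynamic frame adjustment aims for, and it caps efficiency near 1/e, which is why splitting hybrids target values such as 0.441.

```python
def fsa_slot_efficiency(n_tags, frame_len):
    """Expected fraction of successful slots in frame-slotted ALOHA:
    a slot succeeds when exactly one of n_tags replies in it, i.e.
    n * (1/L) * (1 - 1/L)**(n - 1)."""
    p = 1.0 / frame_len
    return n_tags * p * (1.0 - p) ** (n_tags - 1)
```

Setting the frame length equal to the tag count is nearly optimal, with efficiency approaching 1/e (about 0.368) for large tag populations.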
Memory-induced mechanism for self-sustaining activity in networks
NASA Astrophysics Data System (ADS)
Allahverdyan, A. E.; Steeg, G. Ver; Galstyan, A.
2015-12-01
We study a mechanism of activity sustaining on networks inspired by a well-known model of neuronal dynamics. Our primary focus is the emergence of self-sustaining collective activity patterns, where no single node can stay active by itself, but the activity provided initially is sustained within the collective of interacting agents. In contrast to existing models of self-sustaining activity that are caused by (long) loops present in the network, here we focus on treelike structures and examine activation mechanisms that are due to temporal memory of the nodes. This approach is motivated by applications in social media, where long network loops are rare or absent. Our results suggest that under a weak behavioral noise, the nodes robustly split into several clusters, with partial synchronization of nodes within each cluster. We also study the randomly weighted version of the models where the nodes are allowed to change their connection strength (this can model attention redistribution) and show that it does facilitate the self-sustained activity.
Topological edge states in ultra thin Bi(110) puckered crystal lattice
NASA Astrophysics Data System (ADS)
Wang, Baokai; Hsu, Chuanghan; Chang, Guoqing; Lin, Hsin; Bansil, Arun
We discuss the electronic structure of a 2-ML Bi(110) film with a crystal structure similar to that of black phosphorene. In the absence of spin-orbit coupling (SOC), the film is found to be a semimetal with two kinds of Dirac cones, classified by their locations in the Brillouin zone. All Dirac nodes are protected by crystal symmetry and carry non-zero winding numbers. When considering ribbons along specific directions, projections of Dirac nodes serve as starting or ending points of edge bands, depending on the sign of their winding number. After the inclusion of SOC, all Dirac nodes are gapped out. Correspondingly, the edge states connecting Dirac nodes split and cross each other, forming a Dirac node at the boundary of the 1D Brillouin zone, which suggests that the system is a quantum spin Hall insulator. The nontrivial quantum spin Hall phase is also confirmed by counting the product of parities of the occupied bands at the time-reversal invariant points.
Optimizing fusion PIC code performance at scale on Cori Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, T. S.; Deslippe, J.
In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
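The roofline methodology used to guide these optimizations reduces to one line. The peak-compute and bandwidth numbers in the test are placeholders, not measured Cori KNL values.

```python
def roofline_gflops(arith_intensity, peak_gflops, mem_bw_gbs):
    """Attainable performance under the roofline model: compute-bound
    kernels hit the flat compute peak, memory-bound kernels sit on the
    bandwidth slope (arithmetic intensity in flops/byte, bandwidth in
    GB/s)."""
    return min(peak_gflops, arith_intensity * mem_bw_gbs)
```

Plotting a kernel's measured GF/s against this bound at its arithmetic intensity shows whether vectorization (raising the compute ceiling used) or memory layout changes (raising effective bandwidth or intensity) is the profitable next step.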
A Power-Optimized Cooperative MAC Protocol for Lifetime Extension in Wireless Sensor Networks.
Liu, Kai; Wu, Shan; Huang, Bo; Liu, Feng; Xu, Zhen
2016-10-01
In wireless sensor networks, in order to satisfy the requirement of long working time of energy-limited nodes, we need to design an energy-efficient and lifetime-extended medium access control (MAC) protocol. In this paper, a node cooperation mechanism that one or multiple nodes with higher channel gain and sufficient residual energy help a sender relay its data packets to its recipient is employed to achieve this objective. We first propose a transmission power optimization algorithm to prolong network lifetime by optimizing the transmission powers of the sender and its cooperative nodes to maximize their minimum residual energy after their data packet transmissions. Based on it, we propose a corresponding power-optimized cooperative MAC protocol. A cooperative node contention mechanism is designed to ensure that the sender can effectively select a group of cooperative nodes with the lowest energy consumption and the best channel quality for cooperative transmissions, thus further improving the energy efficiency. Simulation results show that compared to typical MAC protocol with direct transmissions and energy-efficient cooperative MAC protocol, the proposed cooperative MAC protocol can efficiently improve the energy efficiency and extend the network lifetime.
Growing optimal scale-free networks via likelihood
NASA Astrophysics Data System (ADS)
Small, Michael; Li, Yingying; Stemler, Thomas; Judd, Kevin
2015-04-01
Preferential attachment, by which new nodes attach to existing nodes with probability proportional to the existing nodes' degree, has become the standard growth model for scale-free networks, where the asymptotic probability of a node having degree k is proportional to k^-γ. However, the motivation for this model is entirely ad hoc. We use exact likelihood arguments and show that the optimal way to build a scale-free network is to attach most new links to nodes of low degree. Curiously, this leads to a scale-free network with a single dominant hub: a starlike structure we call a superstar network. Asymptotically, the optimal strategy is to attach each new node to one of the nodes of degree k with probability proportional to 1/N + ζ(γ)(k+1)^γ (in an N-node network): a stronger bias toward high-degree nodes than exhibited by standard preferential attachment. Our algorithm generates optimally scale-free networks (the superstar networks) as well as randomly sampling the space of all scale-free networks with a given degree exponent γ. We generate viable realizations with finite N for 1 ≪ γ < 2 as well as γ > 2. We observe an apparently discontinuous transition at γ ≈ 2 between so-called superstar networks and more treelike realizations. Gradually increasing γ further leads to reemergence of a superstar hub. To quantify these structural features, we derive a new analytic expression for the expected degree exponent of a pure preferential attachment process and introduce alternative measures of network entropy. Our approach is generic and can also be applied to an arbitrary degree distribution.
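The growth process can be sketched with a pluggable attachment kernel. Using weight(k) = k reproduces the standard preferential attachment described in the abstract; the paper's likelihood-optimal kernel (whose exact form is garbled in this record) could be dropped in instead.

```python
import random

def grow_network(n_nodes, weight, seed=0):
    """Grow a network by degree-based attachment: each new node links to
    an existing node v chosen with probability proportional to
    weight(k_v), where k_v is v's current degree. weight(k) = k gives
    standard preferential attachment."""
    rng = random.Random(seed)
    degree = [1, 1]          # start from a single edge between nodes 0 and 1
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        weights = [weight(k) for k in degree]
        target = rng.choices(range(len(degree)), weights=weights, k=1)[0]
        edges.append((new, target))
        degree[target] += 1
        degree.append(1)
    return degree, edges
```

Because each new node contributes exactly one edge, the final network is a tree, consistent with the treelike and starlike realizations discussed above.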
Yu, Shanen; Xu, Yiming; Jiang, Peng; Wu, Feng; Xu, Huan
2017-01-01
At present, free-to-move node self-deployment algorithms aim at event coverage and cannot improve network coverage while also accounting for network connectivity, network reliability, and deployment energy consumption. Thus, this study proposes a pigeon-based self-deployment algorithm (PSA) for underwater wireless sensor networks to overcome the limitations of these existing algorithms. In PSA, the sink node first finds its one-hop nodes and maximizes the network coverage in its one-hop region. The one-hop nodes subsequently divide the network into layers and cluster within each layer. Each cluster head node constructs a connected path to the sink node to guarantee network connectivity. Finally, each cluster head node takes the ratio of node movement distance to the change in the coverage redundancy ratio as the objective function and employs pigeon swarm optimization to determine the positions of the nodes. Simulation results show that PSA improves both network connectivity and network reliability, decreases deployment energy consumption, and increases network coverage. PMID:28338615
Kang, Moon-Sung; Choi, Yong-Jin; Moon, Seung-Hyeon
2004-05-15
An approach to enhancing the water-splitting performance of bipolar membranes (BPMs) is to introduce an inorganic substance at the bipolar (BP) junction. In this study, the immobilization of inorganic materials (i.e., iron hydroxides and silicon compounds) at the BP junction and their optimum concentrations were investigated. To immobilize these materials, novel methods were suggested: electrodeposition of the iron hydroxide, and sol-gel processing to introduce silicon groups at the BP junction. At optimal concentrations, the immobilized inorganic materials significantly enhanced the water-splitting fluxes, indicating that they provide alternative paths for water dissociation; at high contents, however, they may reduce the polarization of water molecules between the sulfonic acid and quaternary ammonium groups. Consequently, the amount of inorganic substance introduced should be optimized to obtain maximum water splitting in the BPM.
Optimal resource allocation strategy for two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu
2018-02-01
We study traffic dynamics on two-layer complex networks and focus on a delivery capacity allocation strategy to enhance the traffic capacity, measured by the critical value Rc. Given limited packet-delivering capacity, we propose a delivery capacity allocation strategy that balances the capacities of non-hub and hub nodes to optimize the data flow. With the optimal value of the parameter αc, the maximal network capacity is reached because most nodes are assigned appropriate delivery capacity by the proposed strategy. Our work will be beneficial to network service providers designing optimal networked traffic dynamics.
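A sketch of a degree-weighted capacity split in the spirit of the strategy above. The power-law weighting k**alpha is an assumption standing in for the paper's allocation rule, which this record describes only qualitatively.

```python
def allocate_capacity(degrees, total_capacity, alpha):
    """Divide a fixed packet-delivering budget across nodes with weight
    k**alpha: alpha = 0 spreads capacity uniformly, while larger alpha
    shifts it toward hubs, so tuning alpha trades hub capacity against
    non-hub capacity."""
    weights = [k ** alpha for k in degrees]
    total_w = sum(weights)
    return [total_capacity * w / total_w for w in weights]
```

Sweeping alpha and measuring the onset of congestion (the critical packet-generation rate Rc) would locate the optimal balance point the abstract calls αc.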
Clark, M. Collins; Coleman, P. Dale; Marder, Barry M.
1993-01-01
A compact device, called the split cavity modulator, whose self-generated oscillating electromagnetic field converts a steady particle beam into a modulated particle beam. The particle beam experiences both signs of the oscillating electric field during its transit through the split cavity modulator. The modulated particle beam can then be used to generate microwaves at that frequency, and through the use of extractors, high-efficiency extraction of microwave power is enabled. The modulated beam and the microwave frequency can be varied by the placement of resistive wires at nodes of oscillation within the cavity. The short beam travel length through the cavity permits higher currents because both space-charge and pinching limitations are reduced. The need for an applied magnetic field to control the beam has been eliminated.
Light distribution in diffractive multifocal optics and its optimization.
Portney, Valdemar
2011-11-01
Purpose: To expand a geometrical model of diffraction efficiency and its interpretation to the multifocal optic, and to introduce formulas for the analysis of far and near light distribution, their application to multifocal intraocular lenses (IOLs), and diffraction efficiency optimization. Setting: Medical device consulting firm, Newport Coast, California, USA. Design: Experimental study. Methods: Application of a geometrical model to the kinoform (a single-focus diffractive optical element) was expanded to a multifocal optic to produce analytical definitions of the light split between far and near images and the light loss to other diffraction orders. Results: The geometrical model gave a simple interpretation of the light split in a diffractive multifocal IOL. An analytical definition of the light split between far, near, and light loss was introduced as curve-fitting formulas. Several examples of application to common multifocal diffractive IOLs were developed, for example, the change in light split with wavelength. The analytical definition of diffraction efficiency may assist in the optimization of multifocal diffractive optics that minimize light loss. Conclusions: Formulas for the analysis of light split between the different foci of multifocal diffractive IOLs are useful for interpreting the dependence of diffraction efficiency on physical characteristics, such as the blaze heights of the diffractive grooves and the wavelength of light, as well as for optimizing multifocal diffractive optics. Disclosure is found in the footnotes.
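The familiar scalar-theory result behind such far/near light splits (not necessarily the paper's geometrical model) assigns diffraction order m of a kinoform the efficiency sinc²(α − m), where α is the normalized blaze phase height:

```python
import math

def kinoform_efficiency(alpha, m):
    """Scalar diffraction efficiency of order m for a blazed (kinoform)
    grating with normalized phase height alpha (alpha = 1 puts all light
    into order 1): eta_m = sinc(alpha - m)**2 with
    sinc(x) = sin(pi*x)/(pi*x)."""
    x = alpha - m
    if x == 0.0:
        return 1.0
    s = math.sin(math.pi * x) / (math.pi * x)
    return s * s
```

A half-wave blaze (alpha = 0.5), as in a typical bifocal diffractive IOL, sends about 40.5% of the light each to order 0 (far) and order 1 (near), with the remainder lost to higher orders, which is the light-loss budget the abstract's optimization targets.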
NASA Astrophysics Data System (ADS)
Xu, Guoping; Udupa, Jayaram K.; Tong, Yubing; Cao, Hanqiang; Odhner, Dewey; Torigian, Drew A.; Wu, Xingyu
2018-03-01
Currently, many papers have been published on the detection and segmentation of lymph nodes from medical images. However, it remains a challenging problem owing to low contrast with surrounding soft tissues and the variation of lymph node size and shape on computed tomography (CT) images. This is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method in the AAR framework. Two strategies are taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometric nearness. The other is to find the optimal parent (an organ or union of organs) to serve as an anchor for each lymph node station, based on recognition error, and thereby find an overall optimal hierarchy arranging anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized to within 2-3 voxels of the ground truth.
Optimal navigation for characterizing the role of the nodes in complex networks
NASA Astrophysics Data System (ADS)
Cajueiro, Daniel O.
2010-05-01
In this paper, we explore how the approach of optimal navigation (Cajueiro (2009) [33]) can be used to evaluate the centrality of a node and to characterize its role in a network. Using the subway network of Boston and the London rapid transit rail as proxies for complex networks, we show that the centrality measures inherited from the approach of optimal navigation may be considered if one desires to evaluate the centrality of the nodes using other pieces of information beyond the geometric properties of the network. Furthermore, evaluating the correlations between these inherited measures and classical measures of centralities such as the degree of a node and the characteristic path length of a node, we have found two classes of results. While for the London rapid transit rail, these inherited measures can be easily explained by these classical measures of centrality, for the Boston underground transportation system we have found nontrivial results.
Applications Performance Under MPL and MPI on NAS IBM SP2
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)
1994-01-01
On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of an RS/6000 590 workstation module with a 66.5 MHz clock, which can perform four floating point operations per clock for a peak performance of 266 Mflop/s. By the end of 1994, the 64-node IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS/6000 590 will help application scientists in porting, optimizing, and tuning codes from other machines, such as the CRAY C90 and the Paragon, to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating point units, and data cache optimization on the RS/6000 590 are illustrated, with examples giving the performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high-profile debate over data storage and its growth has become a strategic task in the world of networking. Storage mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is to find optimal data storage positions in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved by the fuzzy C-means algorithm. This work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
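As a rough illustration of the idea, the sketch below uses a plain particle swarm (not the paper's hybrid PSO, and without its clustering stage) to place a single storage node so that a rate-weighted squared-distance proxy for transmission energy is minimized. The producer/consumer coordinates and data rates are invented for the example.

```python
import random

random.seed(0)

# Hypothetical field layout: (x, y, data_rate) per producer/consumer.
producers = [(1.0, 1.0, 5.0), (9.0, 2.0, 3.0)]
consumers = [(5.0, 9.0, 4.0)]

def energy_cost(pos):
    """Energy proxy: data rate times squared distance to the storage node."""
    x, y = pos
    return sum(r * ((x - px) ** 2 + (y - py) ** 2)
               for px, py, r in producers + consumers)

def pso(n_particles=20, iters=100, bounds=(0.0, 10.0)):
    lo, hi = bounds
    parts = [[random.uniform(lo, hi), random.uniform(lo, hi)]
             for _ in range(n_particles)]
    vels = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in parts]                 # per-particle best positions
    gbest = min(pbest, key=energy_cost)           # swarm-wide best position
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vels[i][d] = (0.7 * vels[i][d]                    # inertia
                              + 1.5 * r1 * (pbest[i][d] - p[d])   # cognitive
                              + 1.5 * r2 * (gbest[d] - p[d]))     # social
                p[d] = min(hi, max(lo, p[d] + vels[i][d]))
            if energy_cost(p) < energy_cost(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=energy_cost)
    return gbest

best = pso()
```

For this quadratic cost the true optimum is the rate-weighted centroid, so the swarm's answer can be checked directly.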
A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.
Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani
2012-01-01
Planning energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. The method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to reinforce the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, the hop count, and the average energy of both the route and the network. The method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime, and reduced packet loss. Simulation results show that our scheme performs much better than Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
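A multi-criteria route quality function of this kind can be sketched in a few lines; the weights and the exact combination below are assumptions for illustration, not the paper's formula.

```python
def route_quality(route_energies, w_min=0.5, w_avg=0.3, w_hop=0.2):
    """Simplified multi-criteria route score: favors routes whose weakest
    node still has charge, whose average residual energy is high, and
    which use few hops. Energies are residual battery levels in [0, 1]."""
    min_e = min(route_energies)                       # weakest node
    avg_e = sum(route_energies) / len(route_energies) # average energy
    hop_term = 1.0 / len(route_energies)              # fewer hops -> larger
    return w_min * min_e + w_avg * avg_e + w_hop * hop_term

# Two candidate routes to the sink: balanced vs. one nearly drained node.
balanced = [0.6, 0.7, 0.6]
drained = [0.9, 0.05, 0.9]
best = max([balanced, drained], key=route_quality)
```

The min-energy term is what steers traffic away from routes that would exhaust a single node, which is the energy-balancing behavior the scheme aims for.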
Ultrascalable petaflop parallel supercomputer
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY
2010-07-20
A massively parallel supercomputer of petaOPS scale includes node architectures based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing: a Torus network, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm to optimize processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
Self-adaptive formation of uneven node spacings in wild bamboo
NASA Astrophysics Data System (ADS)
Shima, Hiroyuki; Sato, Motohiro; Inoue, Akio
2016-02-01
Bamboo has a distinctive structure wherein a long cavity inside a cylindrical woody section is divided into many chambers by stiff diaphragms. The diaphragms are inserted at nodes and thought to serve as ring stiffeners for bamboo culms against the external load; if this is the case, the separation between adjacent nodes should be configured optimally in order to enhance the mechanical stability of the culms. Here, we reveal the hitherto unknown blueprint of the optimal node spacings used in the growth of wild bamboo. Measurement data analysis together with theoretical formulations suggest that wild bamboos effectively control their node spacings as well as other geometric parameters in accord with the lightweight and high-strength design concept.
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a promising new research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidates and the flow rates on the connection links. The heuristic algorithm design is also based on the Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
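A minimal sketch of the relay-node ingredient, with a brute-force grid search standing in for the paper's Delaunay-based candidate generation and linear program: for three hypothetical terminals, a well-placed relay (near the Fermat point) beats the best tree that uses terminals only. This illustrates only the geometric relay-placement idea, not the network-coding gain that distinguishes SIF from Steiner trees.

```python
import itertools
import math

# Hypothetical terminals of a 3-node multicast in 2-D Euclidean space.
terminals = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def star_cost(relay):
    """Cost of connecting every terminal directly to a single relay node."""
    return sum(dist(t, relay) for t in terminals)

# Grid search stands in for candidate generation + LP selection:
# enumerate candidate relay positions, keep the cheapest star topology.
candidates = [(x / 10, y / 10) for x in range(0, 41) for y in range(0, 31)]
best_relay = min(candidates, key=star_cost)

# Best spanning path over terminals only (equals the MST for 3 nodes).
mst_cost = min(dist(a, b) + dist(b, c)
               for a, b, c in itertools.permutations(terminals))
```

For this triangle the optimal relay sits at roughly (2.0, 1.15) with total cost about 6.46, noticeably below the terminal-only tree cost of about 7.21.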
Field-Based Optimal Placement of Antennas for Body-Worn Wireless Sensors
Januszkiewicz, Łukasz; Di Barba, Paolo; Hausman, Sławomir
2016-01-01
We investigate a case of automated energy-budget-aware optimization of the physical position of nodes (sensors) in a Wireless Body Area Network (WBAN). This problem has not yet been presented in the literature, as opposed to antenna and routing optimization, which are relatively well-addressed. In our research, which was inspired by a safety-critical application for firefighters, the sensor network consists of three nodes located on the human body. The nodes communicate over a radio link operating in the 2.4 GHz or 5.8 GHz ISM frequency band. Two sensors have a fixed location: one on the head (earlobe pulse oximetry) and one on the arm (with accelerometers, temperature and humidity sensors, and a GPS receiver), while the position of the third sensor can be adjusted within a predefined region on the wearer’s chest. The path loss between each node pair strongly depends on the location of the nodes and is difficult to predict without performing a full-wave electromagnetic simulation. Our optimization scheme employs evolutionary computing. The novelty of our approach lies not only in the formulation of the problem but also in linking a fully automated optimization procedure with an electromagnetic simulator and a simplified human body model. This combination turns out to be a computationally effective solution, which, depending on the initial placement, has the potential to improve the performance of our example sensor network setup by up to about 20 dB with respect to the path loss between selected nodes. PMID:27196911
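A toy version of the evolutionary loop can be sketched as follows. A made-up smooth surrogate stands in for the full-wave electromagnetic simulation that supplies path-loss values in the paper, and the placement region is normalized to the unit square; everything here is an assumption for illustration.

```python
import random

random.seed(1)

def path_loss(pos):
    """Hypothetical surrogate for simulated path loss (dB) vs. the chest
    sensor position (x, y); in the paper this value comes from a full-wave
    EM simulation of a body model, not a closed-form expression."""
    x, y = pos
    return 60 + 8 * ((x - 0.3) ** 2 + (y - 0.7) ** 2)

def evolve(generations=40, pop_size=12):
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=path_loss)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            # Arithmetic crossover plus small Gaussian mutation.
            children.append(tuple((ai + bi) / 2 + random.gauss(0, 0.05)
                                  for ai, bi in zip(a, b)))
        pop = parents + children
    return min(pop, key=path_loss)

best = evolve()
```

With the surrogate's optimum at (0.3, 0.7) and a floor of 60 dB, the population should settle close to that point after a few dozen generations.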
Efficient Implementation of MrBayes on Multi-GPU
Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang
2013-01-01
MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)3), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)3 Bayesian algorithm and its improved and parallel versions are still not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)3 (aMCMCMC), of MrBayes (MC)3 on the Compute Unified Device Architecture. By dynamically adjusting the task granularity to adapt to the input data size and hardware configuration, it makes full use of GPU cores across different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new “node-by-node” task scheduling strategy is developed to improve concurrency, and several optimization methods are used to reduce extra overhead. Experimental results show that a(MC)3 achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)3 is dramatically faster than all previous (MC)3 algorithms and scales well to large GPU clusters. PMID:23493260
The EarthServer Federation: State, Role, and Contribution to GEOSS
NASA Astrophysics Data System (ADS)
Merticariu, Vlad; Baumann, Peter
2016-04-01
The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. With its service interface rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a series of clients can dock into the services, ranging from the open-source OpenLayers and QGIS through NASA's open-source WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technology, the rasdaman Array Database System, which transparently federates queries so that users can simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of the GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube / metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO s.r.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecasts (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding in Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.
Influence maximization in complex networks through optimal percolation
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Makse, Hernán A.
2015-08-01
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
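The non-backtracking analysis behind this approach leads to the Collective Influence score, CI_l(i) = (k_i - 1) × Σ_{j ∈ ∂Ball(i, l)} (k_j - 1), where the sum runs over nodes at shortest-path distance l from i. A pure-Python sketch on a toy graph (edges invented for the example; radius l = 1 because the graph is tiny, whereas l = 2-4 is typical in practice):

```python
from collections import deque

# Toy undirected network: hub 0 joined to three sub-hubs, each sub-hub
# serving two leaves.
graph = {
    0: [1, 2, 3],
    1: [0, 4, 5], 2: [0, 6, 7], 3: [0, 8, 9],
    4: [1], 5: [1], 6: [2], 7: [2], 8: [3], 9: [3],
}

def ball_boundary(g, i, ell):
    """Nodes at exactly shortest-path distance ell from node i (BFS)."""
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [v for v, d in dist.items() if d == ell]

def collective_influence(g, i, ell=1):
    """CI_ell(i) = (k_i - 1) * sum of (k_j - 1) over the Ball(i, ell) boundary."""
    deg = {n: len(nbrs) for n, nbrs in g.items()}
    return (deg[i] - 1) * sum(deg[j] - 1 for j in ball_boundary(g, i, ell))

scores = {n: collective_influence(graph, n) for n in graph}
```

Leaves score zero (the (k_i - 1) factor vanishes), and the hub whose neighborhood is richest in further connections scores highest, which is the intuition the full adaptive algorithm exploits.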
Tracking trade transactions in water resource systems: A node-arc optimization formulation
NASA Astrophysics Data System (ADS)
Erfani, Tohid; Huskova, Ivana; Harou, Julien J.
2013-05-01
We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate that the arc-path model solves the problem with fewer constraints, but the proposed formulation uses a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed approach allows converting existing node-arc hydroeconomic models that broadly represent water trading into ones that also track individual supplier-receiver relationships (trade transactions).
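The node-arc formulation can be made concrete with a toy example: build the node-arc incidence matrix A from a simple connectivity list and check that a candidate set of arc flows x satisfies the conservation constraints A x = b for one supplier-receiver transaction. The network, flows, and demands below are invented, and the paper solves for the flows with an optimization model rather than merely checking them.

```python
# Tiny water network: nodes 0..3, arcs as (tail, head) pairs. The arc list
# plays the role of the paper's node-to-node connectivity matrix.
arcs = [(0, 1), (0, 2), (1, 3), (2, 3)]
n_nodes = 4

def incidence_matrix(arcs, n):
    """Node-arc incidence matrix A: A[v][a] = -1 if arc a leaves node v,
    +1 if it enters node v, 0 otherwise."""
    A = [[0] * len(arcs) for _ in range(n)]
    for a, (u, v) in enumerate(arcs):
        A[u][a] = -1
        A[v][a] = +1
    return A

A = incidence_matrix(arcs, n_nodes)

# One trade transaction: supplier node 0 sends 5 units to receiver node 3.
b = [-5, 0, 0, 5]        # net supply (-) / demand (+) per node
flow = [3, 2, 3, 2]      # candidate arc flows (the LP would compute these)

def conserves(A, flow, b):
    """Check the flow-conservation constraints A x = b at every node."""
    return all(sum(A[v][a] * flow[a] for a in range(len(flow))) == b[v]
               for v in range(len(A)))
```

Per-transaction (multicommodity) tracking then amounts to keeping one such flow vector, with its own b, for each supplier-receiver pair.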
Rapid and continuous magnetic separation in droplet microfluidic devices.
Brouzes, Eric; Kruse, Travis; Kimmerling, Robert; Strey, Helmut H
2015-02-07
We present a droplet microfluidic method to extract molecules of interest from a droplet in a rapid and continuous fashion. We accomplish this by first marginalizing functionalized super-paramagnetic beads within the droplet using a magnetic field, and then splitting the droplet into one droplet containing the majority of magnetic beads and one droplet containing the minority fraction. We quantitatively analysed the factors which affect the efficiency of marginalization and droplet splitting to optimize the enrichment of magnetic beads. We first characterized the interplay between the droplet velocity and the strength of the magnetic field and its effect on marginalization. We found that marginalization is optimal at the midline of the magnet and that marginalization is a good predictor of bead enrichment through splitting at low to moderate droplet velocities. Finally, we focused our efforts on manipulating the splitting profile to improve the enrichment provided by asymmetric splitting. We designed asymmetric splitting forks that employ capillary effects to preferentially extract the bead-rich regions of the droplets. Our strategy represents a framework to optimize magnetic bead enrichment methods tailored to the requirements of specific droplet-based applications. We anticipate that our separation technology is well suited for applications in single-cell genomics and proteomics. In particular, our method could be used to separate mRNA bound to poly-dT functionalized magnetic microparticles from single cell lysates to prepare single-cell cDNA libraries.
An enhanced performance through agent-based secure approach for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Bisen, Dhananjay; Sharma, Sanjeev
2018-01-01
This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc network. In this approach, agent nodes are selected through optimal node reliability as a factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval of node. After selection of agent nodes, a procedure of malicious behaviour detection is performed using fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, comparative analysis is done with conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.
Finding influential nodes for integration in brain networks using optimal percolation theory.
Del Ferraro, Gino; Moreno, Andrea; Min, Byungjoon; Morone, Flaviano; Pérez-Ramírez, Úrsula; Pérez-Cervera, Laura; Parra, Lucas C; Holodny, Andrei; Canals, Santiago; Makse, Hernán A
2018-06-11
Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here, we apply optimal percolation theory and pharmacogenetic interventions in vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.
Adaptive critics for dynamic optimization.
Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar
2010-06-01
A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to the node's battery capacity and the movement pattern of animals in its environment, in order to obtain snapshots of the animal uniformly along its trajectory. Simulation results show that the sleep time of the node determined by the actor critic yields superior quality of sensory data acquisition and enhanced node longevity. Copyright 2010 Elsevier Ltd. All rights reserved.
Comments on the Diffusive Behavior of Two Upwind Schemes
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1998-01-01
The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and locally one-dimensional finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2.5 seen for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a speedup of 29 over finite volume.
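The artificial diffusion the comparison refers to can be seen in the simplest possible setting: first-order upwind finite-volume advection of a step profile in 1-D. This is a generic illustration of the mechanism, not the paper's two-dimensional fluctuation-splitting scheme.

```python
# First-order upwind finite-volume update for u_t + a u_x = 0 on a periodic
# 1-D grid. The scheme's leading truncation error acts as an artificial
# diffusion with coefficient (a*dx/2)*(1 - CFL), which smears sharp fronts.
n, cfl = 50, 0.5                 # number of cells and Courant number a*dt/dx
u = [1.0 if i < n // 2 else 0.0 for i in range(n)]   # step profile

for _ in range(20):
    # u[i-1] wraps to u[-1] at i = 0, giving periodic boundaries.
    u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n)]

mass = sum(u)                                    # conserved by construction
smeared = sum(1 for v in u if 0.01 < v < 0.99)   # cells inside the fronts
```

The scheme is conservative and monotone (no over/undershoots for CFL ≤ 1), but the initially sharp discontinuities spread over several cells, which is exactly the grid-dependent numerical diffusion the study quantifies.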
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the nine outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
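In the thin-grating model, the efficiency of diffraction order m is the squared magnitude of the m-th Fourier coefficient of exp(i·φ(x)) over one period. The sketch below uses a simple binary phase profile rather than the paper's optimized 1x9 designs, but it shows the same sensitivity mechanism: an etch-depth fabrication error redistributes energy between orders.

```python
import cmath
import math

def order_efficiency(phase_profile, m):
    """Fraction of energy diffracted into order m: |c_m|^2, where c_m is the
    m-th discrete Fourier coefficient of exp(i*phi(x)) over one period."""
    n = len(phase_profile)
    c = sum(cmath.exp(1j * p) * cmath.exp(-2j * math.pi * m * k / n)
            for k, p in enumerate(phase_profile)) / n
    return abs(c) ** 2

def binary_grating(depth, n=512):
    """Binary phase grating: phase `depth` on half the period, 0 elsewhere."""
    return [depth if k < n // 2 else 0.0 for k in range(n)]

# Nominal pi-depth splitter vs. a 10% etch-depth fabrication error.
nominal = order_efficiency(binary_grating(math.pi), 1)
erred = order_efficiency(binary_grating(0.9 * math.pi), 1)
```

For the ideal π-depth binary grating the first-order efficiency is 4/π² ≈ 0.405 and the zeroth order is suppressed; the 10% depth error lowers the first-order efficiency and leaks energy back into the zeroth order, which is the kind of variation the paper's robust designs minimize.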
Alanazi, Adwan; Elleithy, Khaled
2016-09-07
Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. Existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies and depends on the correct placement of sensor nodes in the field; thus, maximizing multimedia coverage is the most important issue in the delivery of multimedia content. The nodes in WMSNs are either static or mobile, so node connections change continuously due to mobility, which causes additional energy consumption and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize multimedia node coverage and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes compute the directional coverage. Furthermore, OHND maintains a continuous neighbor discovery process in order to handle node mobility. We implement our proposed algorithms using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage under multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches.
Locating influential nodes in complex networks
Malliaros, Fragkiskos D.; Rossi, Maria-Evgenia G.; Vazirgiannis, Michalis
2016-01-01
Understanding and controlling spreading processes in networks is an important topic with many diverse applications, including information dissemination, disease propagation and viral marketing. It is of crucial importance to identify which entities act as influential spreaders that can propagate information to a large portion of the network, in order to ensure efficient information diffusion, optimize available resources or even control the spreading. In this work, we capitalize on the properties of the K-truss decomposition, a triangle-based extension of the core decomposition of graphs, to locate individual influential nodes. Our analysis on real networks indicates that the nodes belonging to the maximal K-truss subgraph show better spreading behavior compared to previously used importance criteria, including node degree and k-core index, leading to faster and wider epidemic spreading. We further show that nodes belonging to such dense subgraphs, dominate the small set of nodes that achieve the optimal spreading in the network. PMID:26776455
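The K-truss itself is straightforward to compute by iterated edge deletion: an edge survives in the k-truss only if it participates in at least k-2 triangles of the remaining subgraph. A minimal pure-Python sketch on an invented toy graph (a 4-clique with a pendant chain):

```python
import itertools

def k_truss(edge_list, k):
    """Maximal subgraph in which every edge is supported by at least k-2
    triangles, computed by repeatedly deleting under-supported edges."""
    edges = {frozenset(e) for e in edge_list}
    changed = True
    while changed:
        changed = False
        adj = {}
        for e in edges:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        for e in list(edges):
            u, v = tuple(e)
            support = len(adj[u] & adj[v])   # common neighbors = triangles
            if support < k - 2:
                edges.discard(e)
                changed = True
    return edges

# Toy network: a 4-clique (dense core) with a pendant chain attached.
clique = [frozenset(p) for p in itertools.combinations(range(4), 2)]
chain = [frozenset({3, 4}), frozenset({4, 5})]
core = k_truss(clique + chain, k=4)
```

The chain edges sit in no triangles and are pruned, while every clique edge has two common neighbors and survives, so the 4-truss recovers exactly the dense core whose members the paper identifies as the strongest spreaders.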
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved constitutes the primary parallel split, which was studied using a hypercube-like architecture having clusters of shared-memory processors at each node. The approach is demonstrated using examples of simple steady-state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
NASA Astrophysics Data System (ADS)
Dalguer, L. A.; Day, S. M.
2006-12-01
Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods, the Thick Fault (TF) proposed by Madariaga et al (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in the fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by a spatial step dx for the SG, and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions in turn are controlled by the jump conditions and a friction law. Our 3D test-problem solutions show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved to be sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient.
Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reduce uncertainties in numerical simulations of earthquake source dynamics and ground motion, and therefore important to improving our understanding of earthquake physics in general.
Prediction-based Dynamic Energy Management in Wireless Sensor Networks
Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei
2007-01-01
Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter is introduced to predict the target state, and the prediction is used to awaken wireless sensor nodes selectively so that their sleep time is prolonged. Exploiting the distributed computing capability of nodes, an optimization approach combining a distributed genetic algorithm with simulated annealing is proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implement target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes is presented to achieve extra energy conservation. Experimental results on target tracking verify that energy efficiency is enhanced by prediction-based dynamic energy management.
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A(asterisk) (IDA-asterisk) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA-asterisk by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
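As a concrete reference point, serial IDA* (the building block that parallel window search distributes across processes, one cost threshold per process) can be sketched as follows; the graph encoding and function names are our own illustration, not the authors' code:

```python
def ida_star(start, goal, neighbors, h):
    """Iterative-Deepening-A*: repeated depth-first searches with an
    increasing f = g + h cost threshold.  In parallel window search,
    each process would run one such iteration with its own threshold."""
    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return None, f          # prune; report the f that exceeded
        if node == goal:
            return path, f
        minimum = float('inf')
        for nxt, cost in neighbors(node):
            if nxt in path:          # avoid cycles on the current path
                continue
            found, t = dfs(nxt, g + cost, threshold, path + [nxt])
            if found is not None:
                return found, t
            minimum = min(minimum, t)
        return None, minimum

    threshold = h(start)
    while True:
        found, t = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None              # no solution exists
        threshold = t                # next iteration: smallest exceeded f
```

With an admissible heuristic, the first goal found is optimal; the cost of the final (goal) iteration is exactly the serial bottleneck that node ordering and parallel windows are designed to attack.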
Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon
2014-01-01
We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
Alanazi, Adwan; Elleithy, Khaled
2016-01-01
Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. The existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies, and it depends on the correct location of the sensor nodes in the field. Thus, maximizing the multimedia coverage is the most important issue in the delivery of multimedia contents. The nodes in WMSNs are either static or mobile. Thus, the node connections change continuously due to the mobility in wireless multimedia communication, which causes additional energy consumption and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize the multimedia node coverage, and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes to compute the directional coverage. Furthermore, OHND is used to maintain a continuous node–continuous neighbor discovery process in order to handle the mobility of the nodes. We implement our proposed algorithms by using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage and multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches. PMID:27618048
Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.
Shillcock, R; Ellison, T M; Monaghan, P
2000-10-01
Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.
Rapid and continuous magnetic separation in droplet microfluidic devices
Brouzes, Eric; Kruse, Travis; Kimmerling, Robert; ...
2014-12-03
Here, we present a droplet microfluidic method to extract molecules of interest from a droplet in a rapid and continuous fashion. We accomplish this by first marginalizing functionalized super-paramagnetic beads within the droplet using a magnetic field, and then splitting the droplet into one droplet containing the majority of magnetic beads and one droplet containing the minority fraction. We quantitatively analysed the factors which affect the efficiency of marginalization and droplet splitting to optimize the enrichment of magnetic beads. We first characterized the interplay between the droplet velocity and the strength of the magnetic field and its effect on marginalization. We found that marginalization is optimal at the midline of the magnet and that marginalization is a good predictor of bead enrichment through splitting at low to moderate droplet velocities. Finally, we focused our efforts on manipulating the splitting profile to improve the enrichment provided by asymmetric splitting. We designed asymmetric splitting forks that employ capillary effects to preferentially extract the bead-rich regions of the droplets. Our strategy represents a framework to optimize magnetic bead enrichment methods tailored to the requirements of specific droplet-based applications. We anticipate that our separation technology is well suited for applications in single-cell genomics and proteomics. In particular, our method could be used to separate mRNA bound to poly-dT functionalized magnetic microparticles from single cell lysates to prepare single-cell cDNA libraries.
Distributed clone detection in static wireless sensor networks: random walk with network division.
Khan, Wazir Zada; Aalsalem, Mohammed Y; Saad, N M
2015-01-01
Wireless Sensor Networks (WSNs) are vulnerable to clone attacks, or node replication attacks, as they are deployed in hostile and unattended environments where they are deprived of physical protection and the sensor nodes lack tamper-resistance. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not efficiently detected, the adversary is further capable of mounting a wide variety of internal attacks which can undermine the various protocols and sensor applications. Several solutions have been proposed in the literature to address the crucial problem of clone detection, but they are not satisfactory as they suffer from serious drawbacks. In this paper we propose a novel distributed solution called Random Walk with Network Division (RWND) for the detection of node replication attacks in static WSNs, which is based on the claimer-reporter-witness framework and combines a simple random walk with network division: the network is split into levels and areas, and a random walk is employed within each area to select the witness nodes. This splitting makes clone detection more efficient, and the high security of witness nodes is ensured with moderate communication and memory overheads. Our simulation results show that RWND outperforms the existing witness-node-based strategies with moderate communication and memory overheads.
A group evolving-based framework with perturbations for link prediction
NASA Astrophysics Data System (ADS)
Si, Cuiqi; Jiao, Licheng; Wu, Jianshe; Zhao, Jin
2017-06-01
Link prediction is a ubiquitous application in many fields which uses partially observed information to predict the absence or presence of links between node pairs. The study of group evolution provides reasonable explanations of the behaviors of nodes, the relations between nodes and community formation in a network. Possible events in group evolution include continuing, growing, splitting, forming and so on. The changes observed in networks are to some extent the result of these events. In this work, we present a group evolving-based characterization of nodes' behavioral patterns, from which we can estimate the probability that two nodes will interact. In general, the primary aim of this paper is to offer a minimal toy model to detect missing links based on the evolution of groups and to give a simple explanation of the rationality of the model. We first introduce perturbations into networks to obtain stable cluster structures, and the stable clusters determine the stability of each node. Then fluctuations, another node behavior, are assessed by the participation of each node in its own belonging group. Finally, we demonstrate that such characteristics allow us to predict link existence and propose a model for link prediction which outperforms many classical methods with lower computational cost at large scales. Encouraging experimental results obtained on real networks show that our approach can effectively predict missing links in networks, and even when nearly 40% of the edges are missing, it retains stable performance.
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That method does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than polynomial, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
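The flavor of threshold optimization by dynamic programming can be conveyed with a simplified sketch (our own illustration, not the paper's algorithm: it maximizes the end-to-end product of per-hop success probabilities, assumes each transmission attempt costs one delay unit, and all names are ours): given per-hop link success probabilities p_i and a delay budget, a hop given u_i attempts succeeds with probability 1 - (1 - p_i)^{u_i}, and the DP splits the budget across hops.

```python
from functools import lru_cache

def optimal_thresholds(p, delta):
    """Toy DP: choose per-hop retransmission thresholds u_i >= 1 with
    sum(u_i) <= delta, maximizing prod_i (1 - (1 - p[i]) ** u_i).
    Returns (best probability, tuple of thresholds)."""
    n = len(p)

    @lru_cache(maxsize=None)
    def best(i, budget):
        if i == n:
            return 1.0, ()
        top, choice = 0.0, ()
        # reserve at least one attempt for each remaining hop
        for u in range(1, budget - (n - 1 - i) + 1):
            succ = 1.0 - (1.0 - p[i]) ** u
            rest, tail = best(i + 1, budget - u)
            if succ * rest > top:
                top, choice = succ * rest, (u,) + tail
        return top, choice

    return best(0, delta)
```

With p = [0.9, 0.5] and a budget of 4 attempts, the DP gives the weaker link the spare retries: one attempt on the reliable hop and three on the lossy one.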
NASA Astrophysics Data System (ADS)
Wang, Yimin; Braams, Bastiaan J.; Bowman, Joel M.; Carter, Stuart; Tew, David P.
2008-06-01
Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11 147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and "exact" full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased "fixed-node" diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm^-1 in Cartesian coordinates and 22.6 cm^-1 in normal coordinates, with an uncertainty of 2-3 cm^-1. This splitting is also calculated based on a model which makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and DMC ZPE and this calculation gives a tunneling splitting of 21-22 cm^-1. The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm^-1. These calculated tunneling splittings agree with each other to within less than the standard uncertainties obtained with the DMC method used, which are between 2 and 3 cm^-1, and agree well with the experimental values of 21.6 and 2.9 cm^-1 for the H and D transfer, respectively.
Analytical approach to cross-layer protocol optimization in wireless sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. 
Functional dependencies of WSN performance metrics are described in terms of the concatenated protocol parameters. New source-to-destination routes are sought that optimize cross-layer interdependencies to achieve the "best available" performance in the WSN. The protocol design, modified from a known reactive protocol, adapts the achievable performance to the transient network conditions and resource levels. Control of network behavior is realized through the conditional rates of the MVPPs. Optimal cross-layer protocol parameters are determined by stochastic dynamic programming conditions derived from models of transient packetized sensor data flows. Moreover, the defining conditions for WSN configurations, grouping sensor nodes into clusters and establishing data aggregation at processing nodes within those clusters, lead to computationally tractable solutions to the stochastic differential equations that describe network dynamics. Closed-form solution characteristics provide an alternative to the "directed diffusion" methods for resource-efficient WSN protocols published previously by other researchers. Performance verification of the resulting cross-layer designs is found by embedding the optimality conditions for the protocols in actual WSN scenarios replicated in a wireless network simulation environment. Performance tradeoffs among protocol parameters remain for a sequel to the paper.
A Bayesian Sampler for Optimization of Protein Domain Hierarchies
2014-01-01
The process of identifying and modeling functionally divergent subgroups for a specific protein domain class and arranging these subgroups hierarchically has, thus far, largely been done via manual curation. How to accomplish this automatically and optimally is an unsolved statistical and algorithmic problem that is addressed here via Markov chain Monte Carlo sampling. Taking as input a (typically very large) multiple-sequence alignment, the sampler creates and optimizes a hierarchy by adding and deleting leaf nodes, by moving nodes and subtrees up and down the hierarchy, by inserting or deleting internal nodes, and by redefining the sequences and conserved patterns associated with each node. All such operations are based on a probability distribution that models the conserved and divergent patterns defining each subgroup. When we view these patterns as sequence determinants of protein function, each node or subtree in such a hierarchy corresponds to a subgroup of sequences with similar biological properties. The sampler can be applied either de novo or to an existing hierarchy. When applied to 60 protein domains from multiple starting points in this way, it converged on similar solutions with nearly identical log-likelihood ratio scores, suggesting that it typically finds the optimal peak in the posterior probability distribution. Similarities and differences between independently generated, nearly optimal hierarchies for a given domain help distinguish robust from statistically uncertain features. Thus, a future application of the sampler is to provide confidence measures for various features of a domain hierarchy. PMID:24494927
Optimization of pressure gauge locations for water distribution systems using entropy theory.
Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon
2012-12-01
It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective and quantified standard for selecting the optimal pressure gauge location by defining the pressure change at other nodes resulting from a demand change at a specific node using entropy theory. Two cases are considered in terms of demand change: one in which demand at all nodes shows peak load by using a peak factor, and one comprising normally distributed demand changes whose average is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes in practice at each node. The optimal pressure gauge location is determined by prioritizing the node that exchanges the largest amount of information with the whole system, measured by the entropy it gives to (giving entropy) and receives from (receiving entropy) the network. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated by implementing a sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of pressure gauges in water distribution networks can be determined with a more objective standard through entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
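The giving/receiving-entropy ranking can be illustrated with a small sketch (our own simplified reading of the idea, not the study's model: dP, the function names, and the ranking rule are assumptions): treat each row of a pressure-sensitivity matrix as a distribution of the influence a node exerts, each column as the influence it receives, and rank nodes by the sum of the two Shannon entropies.

```python
import math

def gauge_priority(dP):
    """dP[i][j] >= 0: magnitude of pressure change at node j caused by
    a demand change at node i.  A node with high 'giving' entropy
    spreads its influence widely; high 'receiving' entropy means it
    senses changes from many nodes.  Returns node indices, best gauge
    candidate first."""
    def entropy(v):
        total = sum(v)
        return -sum((x / total) * math.log(x / total) for x in v if x > 0)

    n = len(dP)
    giving = [entropy(row) for row in dP]
    receiving = [entropy([dP[i][j] for i in range(n)]) for j in range(n)]
    score = [g + r for g, r in zip(giving, receiving)]
    return sorted(range(n), key=lambda i: -score[i])
```

A node whose demand changes propagate uniformly everywhere, and which registers changes from many nodes, ends up ranked first, which matches the intuition that it is the most informative place to install a gauge.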
Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le
2016-07-14
Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, these works do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps in the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull, which gradually form a small spanning tree. Finally, a depth-adjustment strategy based on time markers is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with a high network coverage rate, as well as improved network average node degree, thus increasing network reliability.
Energy efficient sensor scheduling with a mobile sink node for the target tracking application.
Maheswararajah, Suhinthan; Halgamuge, Saman; Premaratne, Malin
2009-01-01
Measurement losses adversely affect the performance of target tracking. The sensor network's life span depends on how efficiently the sensor nodes consume energy. In this paper, we focus on minimizing the total energy consumed by the sensor nodes whilst avoiding measurement losses. Since transmitting data over a long distance consumes a significant amount of energy, a mobile sink node collects the measurements and transmits them to the base station. We assume that the default transmission range of the activated sensor node is limited and can be increased to the maximum range only if the mobile sink node is outside the default transmission range. Moreover, the active sensor node can be changed after a certain time period. The problem is to select an optimal sensor sequence which minimizes the total energy consumed by the sensor nodes. In this paper, we consider two different problems depending on the mobile sink node's path. First, we assume that the mobile sink node's position is known for the entire time horizon and use the dynamic programming technique to solve the problem. Second, the position of the sink node varies over time according to a known Markov chain, and the problem is solved by stochastic dynamic programming. We also present sub-optimal methods to solve our problem. A numerical example is presented in order to discuss the proposed methods' performance.
Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.
Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei
2017-12-04
Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of activated nodes: under the constraint of the moving range, the FIM value serves as the objective function to be minimized over the moving distance of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
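A generic harmony-search loop for a one-dimensional depth decision can be sketched as follows. The objective function, depth bounds, pitch-adjustment width, and the time-varying generating probability are placeholders, not the paper's exact modification.

```python
# Illustrative harmony search over a single node's depth within its moving
# range. The adaptive "generating probability" (hmcr schedule) is an assumed
# stand-in for the paper's modified rule.
import random

def harmony_search(objective, lo, hi, hm_size=10, iters=200, seed=1):
    rng = random.Random(seed)
    memory = [(objective(x), x)
              for x in (rng.uniform(lo, hi) for _ in range(hm_size))]
    memory.sort()  # best harmony first
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters  # assumed: rely more on memory over time
        if rng.random() < hmcr:
            x = rng.choice(memory)[1]
            x += rng.uniform(-1.0, 1.0) * (hi - lo) * 0.05  # pitch adjustment
            x = min(max(x, lo), hi)
        else:
            x = rng.uniform(lo, hi)  # random new depth within moving range
        f = objective(x)
        if f < memory[-1][0]:  # replace the worst harmony
            memory[-1] = (f, x)
            memory.sort()
    return memory[0]  # (best objective value, best depth)
```

In the paper's setting the objective would be the FIM-derived accuracy metric under the moving-range constraint; here a quadratic toy objective suffices to exercise the loop.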
Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search
Zhang, Senlin; Zhang, Qunfei
2017-01-01
Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of activated nodes: under the constraint of the moving range, the FIM value serves as the objective function to be minimized over the moving distance of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme. PMID:29207541
Systems and methods for optimal power flow on a radial network
Low, Steven H.; Peng, Qiuyu
2018-04-24
Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; the node controller is configured to send node operating parameters to the nodes in the set, receive operating parameters from the nodes in the set, calculate a plurality of updated node operating parameters using an iterative process over the operating parameters of the node and of the set, where the iterative process involves evaluation of a closed-form solution, and adjust the node operating parameters.
Zhao, Yanfeng; Li, Xiaolu; Wang, Xiaoyi; Lin, Meng; Zhao, Xinming; Luo, Dehong; Li, Jianying
2017-01-01
Background To investigate the value of single-source dual-energy spectral CT imaging in improving the accuracy of preoperative diagnosis of lymph node metastasis of thyroid carcinoma. Methods Thirty-four thyroid carcinoma patients were enrolled and underwent spectral CT scanning before thyroidectomy and cervical lymph node dissection surgery. Iodine-based material decomposition (MD) images and 101 sets of monochromatic images from 40 to 140 keV were reconstructed after CT scans. The iodine concentrations (IC) of lymph nodes were measured on the MD images and normalized to that of the common carotid artery to obtain the normalized iodine concentration (NIC). The CT number of lymph nodes as a function of photon energy was measured on the 101 sets of images to generate a spectral HU curve and to calculate its slope λHU. The measurements between the metastatic and non-metastatic lymph nodes were statistically compared and receiver operating characteristic (ROC) curves were used to determine the optimal thresholds of these measurements for diagnosing lymph node metastasis. Results A total of 136 lymph nodes were pathologically confirmed. Among them, 102 (75%) were metastatic and 34 (25%) were non-metastatic. The IC, NIC and the slope λHU of the metastatic lymph nodes were 3.93±1.58 mg/mL, 0.70±0.55 and 4.63±1.91, respectively. These values were statistically higher than the respective values of 1.77±0.71 mg/mL, 0.29±0.16 and 2.19±0.91 for the non-metastatic lymph nodes (all P<0.001). ROC analysis determined the optimal diagnostic threshold for IC as 2.56 mg/mL, with a sensitivity, specificity and accuracy of 83.3%, 91.2% and 85.3%, respectively. The optimal threshold for NIC was 0.289, with a sensitivity, specificity and accuracy of 96.1%, 76.5% and 91.2%, respectively. The optimal threshold for the spectral curve slope λHU was 2.692, with a sensitivity, specificity and accuracy of 88.2%, 82.4% and 86.8%, respectively.
Conclusions The measurements obtained in dual-energy spectral CT improve the sensitivity and accuracy for preoperatively diagnosing lymph node metastasis in thyroid carcinoma. PMID:29268547
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in Monte Carlo simulations. Conclusion: The issue of long computing time in 4D treatment planning, requiring Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimal number of compute nodes selected for simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant in that range.
A Systematic Software, Firmware, and Hardware Codesign Methodology for Digital Signal Processing
2014-03-01
[Abstract garbled in extraction; recoverable fragments:] Table 25 maps possible optimal leaf-nodes to design patterns for embedded system design. Software and hardware partitioning is a very difficult challenge in the field of embedded system design.
Information transmission on hybrid networks
NASA Astrophysics Data System (ADS)
Chen, Rongbin; Cui, Wei; Pu, Cunlai; Li, Jie; Ji, Bo; Gakis, Konstantinos; Pardalos, Panos M.
2018-01-01
Many real-world communication networks are hybrid in nature, with both fixed nodes and mobile nodes, such as mobile phone networks mainly composed of fixed base stations and mobile phones. In this paper, we discuss the information transmission process on hybrid networks with both fixed and mobile nodes. The fixed nodes (base stations) are connected as a spatial lattice on the plane forming the information-carrying backbone, while the mobile nodes (users), which are the sources and destinations of information packets, connect to their current nearest fixed nodes respectively to deliver and receive information packets. We observe a phase transition of the traffic load in the hybrid network as the packet generation rate increases through a critical value, which measures the network's capacity for packet delivery. We obtain the optimal speed of the moving nodes that leads to the maximum network capacity. We further improve the network capacity by rewiring the fixed nodes and by considering the current load of fixed nodes during packet transmission. Our purpose is to optimize the network capacity of hybrid networks from the perspective of network science, and provide some insights for the construction of future communication infrastructures.
A Low Power IoT Sensor Node Architecture for Waste Management Within Smart Cities Context.
Cerchecci, Matteo; Luti, Francesco; Mecocci, Alessandro; Parrino, Stefano; Peruzzi, Giacomo; Pozzebon, Alessandro
2018-04-21
This paper focuses on the realization of an Internet of Things (IoT) architecture to optimize waste management in the context of Smart Cities. In particular, a novel typology of sensor node based on the use of low cost and low power components is described. This node is provided with a single-chip microcontroller, a sensor able to measure the filling level of trash bins using ultrasounds and a data transmission module based on the LoRa LPWAN (Low Power Wide Area Network) technology. Together with the node, a minimal network architecture was designed, based on a LoRa gateway, with the purpose of testing the IoT node performances. Especially, the paper analyzes in detail the node architecture, focusing on the energy saving technologies and policies, with the purpose of extending the batteries lifetime by reducing power consumption, through hardware and software optimization. Tests on sensor and radio module effectiveness are also presented.
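The lifetime gains from the energy-saving policies described above can be illustrated with back-of-the-envelope arithmetic; every current draw, timing figure, and the battery capacity below are assumed values for a generic duty-cycled LoRa node, not measurements from the paper.

```python
# Rough battery-lifetime estimate for a duty-cycled sensor node. All figures
# (sleep/active/transmit currents, wake-up schedule, capacity) are assumptions
# chosen only to show the dominance of sleep current in the average draw.
def lifetime_days(capacity_mah=2400.0, sleep_ua=5.0, active_ma=40.0,
                  tx_ma=120.0, wakeups_per_day=24, active_s=2.0, tx_s=1.0):
    sec_day = 86400.0
    duty_s = wakeups_per_day * (active_s + tx_s)  # seconds awake per day
    avg_ma = (sleep_ua / 1000.0 * (sec_day - duty_s)   # sleeping
              + active_ma * wakeups_per_day * active_s  # sensing/processing
              + tx_ma * wakeups_per_day * tx_s          # LoRa transmission
              ) / sec_day
    return capacity_mah / avg_ma / 24.0  # hours of life, converted to days
```

With these assumed figures the node averages around 60 µA and lasts several years; quadrupling the wake-up rate cuts the lifetime dramatically, which is why the paper's duty-cycling policies matter.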
Hop Optimization and Relay Node Selection in Multi-hop Wireless Ad-Hoc Networks
NASA Astrophysics Data System (ADS)
Li, Xiaohua(Edward)
In this paper we propose an efficient approach to determine the optimal hops for multi-hop ad hoc wireless networks. Based on the assumption that nodes use successive interference cancellation (SIC) and maximal ratio combining (MRC) to deal with mutual interference and to utilize all the received signal energy, we show that the signal-to-interference-plus-noise ratio (SINR) of a node is determined only by the nodes before it, not the nodes after it, along a packet forwarding path. Based on this observation, we propose an iterative procedure to select the relay nodes and to calculate the path SINR as well as the capacity of an arbitrary multi-hop packet forwarding path. The complexity of the algorithm is extremely low, and it scales well with network size. The algorithm is applicable in arbitrarily large networks. Simulations demonstrate its desirable performance. The algorithm can be helpful in analyzing the performance of multi-hop wireless networks.
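One loose reading of the SIC/MRC observation can be sketched as follows: interference from earlier transmissions is assumed perfectly cancelled, and MRC sums the useful power received from every node before the current one, so each node's SINR depends only on its predecessors. The path-loss exponent, noise level, and the `path_capacity` helper are illustrative assumptions, not the paper's exact model.

```python
# Illustrative per-node SINR along a forwarding path under the (simplified,
# assumed) model: SIC removes interference, MRC combines all signal copies
# received from the nodes earlier on the path.
import math

def path_capacity(positions, tx_power=1.0, noise=1e-3, alpha=3.0):
    """positions: node coordinates in hop order, source first."""
    def gain(a, b):
        return math.dist(a, b) ** (-alpha)  # assumed power-law path loss
    rates = []
    for k in range(1, len(positions)):
        # MRC: sum useful received power from every earlier node on the path
        snr = sum(tx_power * gain(positions[j], positions[k])
                  for j in range(k)) / noise
        rates.append(math.log2(1 + snr))
    return min(rates)  # path rate limited by the weakest hop
```

Inserting a relay halfway along a long hop raises the weakest-hop SINR, which is the effect the hop-optimization procedure exploits.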
A Low Power IoT Sensor Node Architecture for Waste Management Within Smart Cities Context
Cerchecci, Matteo; Luti, Francesco; Mecocci, Alessandro; Parrino, Stefano; Peruzzi, Giacomo
2018-01-01
This paper focuses on the realization of an Internet of Things (IoT) architecture to optimize waste management in the context of Smart Cities. In particular, a novel typology of sensor node based on the use of low cost and low power components is described. This node is provided with a single-chip microcontroller, a sensor able to measure the filling level of trash bins using ultrasounds and a data transmission module based on the LoRa LPWAN (Low Power Wide Area Network) technology. Together with the node, a minimal network architecture was designed, based on a LoRa gateway, with the purpose of testing the IoT node performances. Especially, the paper analyzes in detail the node architecture, focusing on the energy saving technologies and policies, with the purpose of extending the batteries lifetime by reducing power consumption, through hardware and software optimization. Tests on sensor and radio module effectiveness are also presented. PMID:29690552
Influence maximization in complex networks through optimal percolation
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Makse, Hernan; CUNY Collaboration
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524,65-68 (2015)
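The optimal-percolation result above is commonly operationalized via the Collective Influence heuristic of the cited Nature paper: adaptively remove the node maximizing CI_l(i) = (k_i - 1) · Σ_{j in ∂Ball(i, l)} (k_j - 1). The sketch below is a minimal simplification, not the authors' scalable implementation.

```python
# Minimal Collective Influence (CI) sketch: score each node by its degree
# times the degrees on the frontier of a ball of given radius, then greedily
# remove the top scorer and recompute.
def collective_influence(adj, node, radius=2):
    seen, frontier = {node}, {node}
    for _ in range(radius):  # grow the ball one hop at a time
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    ki = len(adj[node])
    return (ki - 1) * sum(len(adj[j]) - 1 for j in frontier)

def top_influencers(adj, n, radius=2):
    adj = {u: set(vs) for u, vs in adj.items()}  # local mutable copy
    removed = []
    for _ in range(n):
        best = max(adj, key=lambda u: collective_influence(adj, u, radius))
        removed.append(best)
        for v in adj.pop(best):  # delete the node and its edges
            adj[v].discard(best)
    return removed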
Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting
2016-01-28
Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, of which the energy could be replenished through the newly-brewed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes' placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper.
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. Memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, the hybrid scheduling mechanism, which can decrease both the thread scheduling overhead and the number of thread stacks, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is memory cost optimized in LiveOS, but also energy cost; this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to the single-core WSN system. Memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make the multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264
Wireless Sensor Networks - Node Localization for Various Industry Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derr, Kurt; Manic, Milos
Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS) that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrary shaped domains. AFECETS simulation results show that the algorithm 1) provides significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.
Wireless Sensor Networks - Node Localization for Various Industry Problems
Derr, Kurt; Manic, Milos
2015-06-01
Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS) that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrary shaped domains. AFECETS simulation results show that the algorithm 1) provides significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.
Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks
Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo
2011-01-01
Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing (OLSR) and Ad hoc On-demand Distance Vector (AODV) routing protocols, as well as the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease the traffic of HELLO messages as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed. PMID:22346584
Self-configuration and self-optimization process in heterogeneous wireless networks.
Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo
2011-01-01
Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing (OLSR) and Ad hoc On-demand Distance Vector (AODV) routing protocols, as well as the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease the traffic of HELLO messages as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed.
On the Study of Cognitive Bidirectional Relaying with Asymmetric Traffic Demands
NASA Astrophysics Data System (ADS)
Ji, Xiaodong
2015-05-01
In this paper, we consider a cognitive radio network scenario, where two primary users want to exchange information with each other and meanwhile, one secondary node wishes to send messages to a cognitive base station. To meet the target quality of service (QoS) of the primary users and raise the communication opportunity of the secondary nodes, a cognitive bidirectional relaying (BDR) scheme is examined. First, system outage probabilities of conventional direct transmission and BDR schemes are presented. Next, a new system parameter called operating region is defined and calculated, which indicates in which position a secondary node can be a cognitive relay to assist the primary users. Then, a cognitive BDR scheme is proposed, giving a transmission protocol along with a time-slot splitting algorithm between the primary and secondary transmissions. Information-theoretic metric of ergodic capacity is studied for the cognitive BDR scheme to evaluate its performance. Simulation results show that with the proposed scheme, the target QoS of the primary users can be guaranteed, while increasing the communication opportunity for the secondary nodes.
Small worlds in space: Synchronization, spatial and relational modularity
NASA Astrophysics Data System (ADS)
Brede, M.
2010-06-01
In this letter we investigate networks that have been optimized to realize a trade-off between enhanced synchronization and cost of wire to connect the nodes in space. Analyzing the evolved arrangement of nodes in space and their corresponding network topology, a class of small-world networks characterized by spatial and network modularity is found. More precisely, for low cost of wire optimal configurations are characterized by a division of nodes into two spatial groups with maximum distance from each other, whereas network modularity is low. For high cost of wire, the nodes organize into several distinct groups in space that correspond to network modules connected on a ring. In between, spatially and relationally modular small-world networks are found.
BFL: a node and edge betweenness based fast layout algorithm for large scale networks
Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru
2009-01-01
Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization and biological node and graph attributes, and/or are not available for large scale networks, e.g. more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n²) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673
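The betweenness metric that drives BFL is standard; a compact sequential version of Brandes' algorithm is shown below (the paper's fast parallel variant is not reproduced here). Note this form accumulates each unordered pair twice, once per direction.

```python
# Brandes' betweenness centrality: for each source, count shortest paths with
# BFS, then back-propagate pair dependencies along predecessor lists.
from collections import deque

def betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1    # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:           # v precedes w on a
                    sigma[w] += sigma[v]             # shortest path to w
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                    # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

BFL then sorts nodes by this score and inserts the high-betweenness ones first.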
Conceptual Design and Optimal Power Control Strategy for an Eco-Friendly Hybrid Vehicle
NASA Astrophysics Data System (ADS)
Nasiri, N. Mir; Chieng, Frederick T. A.
2011-06-01
This paper presents a new concept for a hybrid vehicle using a torque and speed splitting technique. It is implemented by the newly developed controller in combination with a two-degree-of-freedom epicyclic gear transmission. This approach enables optimization of the power split between the less powerful electrical motor and the more powerful engine while driving a car load. The power split is fundamentally a dual-energy integration mechanism, as it is implemented by using the epicyclic gear transmission that has two inputs and one output for proper power distribution. The developed power split control system manages the operation of both inputs to produce a known output under the condition of maintaining optimum operating efficiency of the internal combustion engine and the electrical motor. This system has huge potential, as it is possible to integrate all the features of hybrid vehicles known to date, such as the regenerative braking system, series hybrid, parallel hybrid, series/parallel hybrid, and even complex hybrid (bidirectional). By using the new power split system it is possible to further reduce fuel consumption and increase overall efficiency.
Lee, JongHyup; Pak, Dohyun
2016-01-01
For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs according to the cellular system, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumption of an ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections. PMID:27589743
Zhang, Lin; Yin, Na; Fu, Xiong; Lin, Qiaomin; Wang, Ruchuan
2017-01-01
With the development of wireless sensor networks, certain network problems have become more prominent, such as limited node resources, low data transmission security, and short network life cycles. To solve these problems effectively, it is important to design an efficient and trusted secure routing algorithm for wireless sensor networks. Traditional ant-colony optimization algorithms exhibit only local convergence and do not consider the residual energy of the nodes, among other problems. This paper introduces a multi-attribute pheromone ant secure routing algorithm based on reputation value (MPASR). This algorithm can reduce the energy consumption of a network and improve the reliability of the nodes’ reputations by filtering nodes with higher coincidence rates and improving the method used to update the nodes’ communication behaviors. At the same time, the node reputation value, the residual node energy and the transmission delay are combined to formulate a synthetic pheromone that is used in the formula for calculating the random proportion rule in traditional ant-colony optimization to select the optimal data transmission path. Simulation results show that the improved algorithm can increase both the security of data transmission and the quality of routing service. PMID:28282894
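The random-proportional rule with a synthetic pheromone can be sketched as below; the weight vector, the inverse-delay term, and both helper functions are illustrative assumptions, not MPASR's exact formulas.

```python
# Illustrative ACO next-hop choice: a synthetic pheromone blends reputation,
# residual energy and (inversely) delay, then the standard random-proportional
# rule tau^alpha * eta^beta selects among candidate neighbors.
import random

def synthetic_pheromone(rep, energy, delay, w=(0.5, 0.3, 0.2)):
    # weights are made-up; delay contributes through 1/(1+delay)
    return w[0] * rep + w[1] * energy + w[2] / (1.0 + delay)

def choose_next_hop(candidates, alpha=1.0, beta=2.0, rng=None):
    """candidates: list of (node, pheromone tau, heuristic eta) tuples."""
    rng = rng or random.Random(7)
    weights = [(tau ** alpha) * (eta ** beta) for _, tau, eta in candidates]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for (node, _, _), w in zip(candidates, weights):
        acc += w
        if r <= acc:  # roulette-wheel selection
            return node
    return candidates[-1][0]
```

A neighbor with a much larger synthetic pheromone is chosen with correspondingly higher probability, which is how the composite metric steers routing toward reputable, energetic, low-delay paths.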
Popularity versus similarity in growing networks.
Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, M Ángeles; Boguñá, Marián; Krioukov, Dmitri
2012-09-27
The principle that 'popularity is attractive' underlies preferential attachment, which is a common explanation for the emergence of scaling in growing networks. If new connections are made preferentially to more popular nodes, then the resulting distribution of the number of connections possessed by nodes follows power laws, as observed in many real networks. Preferential attachment has been directly validated for some real networks (including the Internet), and can be a consequence of different underlying processes based on node fitness, ranking, optimization, random walks or duplication. Here we show that popularity is just one dimension of attractiveness; another dimension is similarity. We develop a framework in which new connections optimize certain trade-offs between popularity and similarity, instead of simply preferring popular nodes. The framework has a geometric interpretation in which popularity preference emerges from local optimization. As opposed to preferential attachment, our optimization framework accurately describes the large-scale evolution of technological (the Internet), social (trust relationships between people) and biological (Escherichia coli metabolic) networks, predicting the probability of new links with high precision. The framework that we have developed can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
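A toy version of the popularity-similarity trade-off described above can be written in a few lines: each new node links to the m existing nodes minimizing the product of birth time (popularity) and angular distance (similarity). The hyperbolic radial coordinates and the model's temperature parameter are omitted for brevity, so this is a caricature of the framework, not the paper's full model.

```python
# Toy popularity-similarity growth: node t appears at a random angle and
# connects to the m existing nodes s minimizing s * d_theta(s, t), instead
# of preferring popular (low-s) nodes alone.
import math, random

def grow(n, m=2, seed=42):
    rng = random.Random(seed)
    angles, edges = [], []
    for t in range(1, n + 1):
        theta = rng.uniform(0.0, 2 * math.pi)
        if t - 1 <= m:
            targets = list(range(1, t))  # too few nodes: connect to all
        else:
            def cost(s):
                d = abs(angles[s - 1] - theta)
                return s * min(d, 2 * math.pi - d)  # popularity x similarity
            targets = sorted(range(1, t), key=cost)[:m]
        edges.extend((s, t) for s in targets)
        angles.append(theta)
    return edges
```

Early-born (popular) nodes still accumulate many links, but a late node that is angularly close (similar) can win a connection over a more popular distant one, which is the paper's departure from pure preferential attachment.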
EUVL back-insertion layout optimization
NASA Astrophysics Data System (ADS)
Civay, D.; Laffosse, E.; Chesneau, A.
2018-03-01
Extreme ultraviolet lithography (EUVL) is targeted for front-up insertion at advanced technology nodes but will be evaluated for back insertion at more mature nodes. Depending on the process level(s) at which insertion occurs, EUVL can combine two or more mask levels onto a single mask. In this paper, layout optimization methods are discussed that can be applied when EUVL back insertion is implemented. The layout optimizations can be focused on improving yield, reliability, or density, depending upon the design needs. The proposed methodology modifies the original two or more colored layers and generates an optimized single-color EUVL layout design.
Distributed Clone Detection in Static Wireless Sensor Networks: Random Walk with Network Division
Khan, Wazir Zada; Aalsalem, Mohammed Y.; Saad, N. M.
2015-01-01
Wireless Sensor Networks (WSNs) are vulnerable to clone attacks, or node replication attacks, because they are deployed in hostile and unattended environments where they are deprived of physical protection and lack tamper-resistant hardware. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not efficiently detected, the adversary can further mount a wide variety of internal attacks that can undermine the various protocols and sensor applications. Several solutions have been proposed in the literature to address the crucial problem of clone detection, but they are not satisfactory, as they suffer from serious drawbacks. In this paper we propose a novel distributed solution, Random Walk with Network Division (RWND), for the detection of node replication attacks in static WSNs. RWND is based on the claimer-reporter-witness framework and combines a simple random walk with network division: the network is split into levels and areas, and a random walk is employed within each area to select witness nodes. This division makes clone detection more efficient, and the high security of witness nodes is ensured with moderate communication and memory overheads. Our simulation results show that RWND outperforms the existing witness-node-based strategies with moderate communication and memory overheads. PMID:25992913
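The two core mechanics of a claimer-reporter-witness scheme, witness selection by random walk and clone detection by conflicting claims, can be sketched as follows. Function names and the data layout are illustrative assumptions, not RWND's actual protocol messages.

```python
import random

def random_walk_witnesses(adjacency, start, steps, rng=None):
    """Short random walk from the starting node inside one area; every
    node visited stores the location claim and becomes a witness."""
    rng = rng or random.Random(1)
    witnesses = [start]
    node = start
    for _ in range(steps):
        node = rng.choice(adjacency[node])
        witnesses.append(node)
    return witnesses

def detect_clone(claims, node_id, location):
    """A witness flags a clone when it holds two conflicting location
    claims for the same node id; otherwise it stores the claim."""
    if node_id in claims and claims[node_id] != location:
        return True
    claims[node_id] = location
    return False
```

A replicated node necessarily claims two different locations; as soon as both walks cross a common witness, the conflict surfaces.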
Yang, Lin; Xiong, Zhenchong; Xie, Qiankun; He, Wenzhuo; Liu, Shousheng; Kong, Pengfei; Jiang, Chang; Guo, Guifang; Xia, Liangping
2018-05-11
The consensus is that a minimum of 12 lymph nodes should be analyzed at colectomy for colon cancer. However, right colon cancer and left colon cancer have different characteristics, and this threshold value for total number of lymph nodes retrieved may not be universally applicable. The data of 63,243 patients with colon cancer treated between 2004 and 2012 were retrieved from the National Cancer Institute's Surveillance, Epidemiology, and End Results database. Multivariate Cox regression analysis was used to determine the predictive value of total number of lymph nodes for survival after adjusting for lymph nodes ratio. The predictive value in left-sided colon cancer and right-sided colon cancer was compared. The optimal total number of lymph nodes cutoff value for prediction of overall survival was identified using the online tool Cutoff Finder. Survival of patients with high total number of lymph nodes (≥ 12) and low total number of lymph nodes (< 12) was compared by Kaplan-Meier analysis. After stratifying by lymph nodes ratio status, total number of lymph nodes ≥ 12 remained an independent predictor of survival in the whole cohort and in right-sided colon cancer, but not in left-sided colon cancer. The optimal cutoff value for total number of lymph nodes was determined to be 11. Low total number of lymph nodes (< 11) was associated with significantly poorer survival after adjusting for lymph nodes ratio in all subgroups except in the subgroup with high lymph nodes ratio (0.5-1.0). Previous reports of the prognostic significance of total number of lymph nodes on node-positive colon cancer were confounded by lymph nodes ratio. The 12-node standard for total number of lymph nodes may not be equally applicable in right-sided colon cancer and left-sided colon cancer.
Node-making process in network meta-analysis of nonpharmacological treatment are poorly reported.
James, Arthur; Yavchitz, Amélie; Ravaud, Philippe; Boutron, Isabelle
2018-05-01
To identify methods to support the node-making process in network meta-analyses (NMAs) of nonpharmacological treatments. We proceeded in two stages. First, we conducted a literature review of guidelines and methodological articles about NMAs to identify methods proposed to lump interventions into nodes. Second, we conducted a systematic review of NMAs of nonpharmacological treatments to extract methods used by authors to support their node-making process. MEDLINE and Google Scholar were searched to identify articles assessing NMA guidelines or methodology intended for NMA authors. MEDLINE, CENTRAL, and EMBASE were searched to identify reports of NMAs including at least one nonpharmacological treatment. Both searches involved articles available from database inception to March 2016. From the methodological review, we identified and extracted methods proposed to lump interventions into nodes. From the systematic review, the reporting of the network was assessed as long as the method described supported the node-making process. Among the 116 articles retrieved in the literature review, 12 (10%) discussed the concept of lumping or splitting interventions in NMAs. No consensual method was identified during the methodological review, and expert consensus was the only method proposed to support the node-making process. Among 5187 references for the systematic review, we included 110 reports of NMAs published between 2007 and 2016. The nodes were described in the introduction section of 88 reports (80%), which suggested that the node content might have been a priori decided before the systematic review. Nine reports (8.1%) described a specific process or justification to build nodes for the network. Two methods were identified: (1) fit a previously published classification and (2) expert consensus. 
Despite the importance of NMA in the delivery of evidence when several interventions are available for a single indication, recommendations on the reporting of the node-making process in NMAs are lacking, and reporting of the node-making process in NMAs seems insufficient. Copyright © 2017 Elsevier Inc. All rights reserved.
Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing
2017-04-20
The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), so heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes; linear programming is then applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.
A coherent Ising machine for 2000-node optimization problems
NASA Astrophysics Data System (ADS)
Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki
2016-11-01
The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
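The Ising mapping the machine exploits can be reproduced classically in a few lines: each node carries a spin in {-1, +1}, an edge is "cut" when its endpoints are anti-aligned, so minimizing the Ising energy E = Σ s_u·s_v over edges maximizes the cut. The sketch below uses single-spin Metropolis annealing as a software stand-in for the optical hardware; the schedule and parameters are illustrative assumptions.

```python
import math
import random

def maxcut_annealing(edges, n, sweeps=200, t0=2.0, rng=None):
    """Max cut via Ising ground-state search with simulated annealing.
    Returns the best cut size observed across all sweeps."""
    rng = rng or random.Random(3)
    spin = [rng.choice((-1, 1)) for _ in range(n)]
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    best = 0
    for sweep in range(sweeps):
        temp = t0 * (1.0 - sweep / sweeps) + 1e-3   # linear cooling
        for i in range(n):
            h = sum(spin[j] for j in nbrs[i])       # local field on spin i
            d_e = -2 * spin[i] * h                  # energy change if i flips
            if d_e < 0 or rng.random() < math.exp(-d_e / temp):
                spin[i] = -spin[i]
        best = max(best, sum(1 for u, v in edges if spin[u] != spin[v]))
    return best
```

On a bipartite graph every edge can be cut, which gives an easy correctness check; the coherent Ising machine plays the same game with 2000 all-to-all coupled optical "spins".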
Optimal Network-based Intervention in the Presence of Undetectable Viruses.
Youssef, Mina; Scoglio, Caterina
2014-08-01
This letter presents an optimal control framework to reduce the spread of viruses in networks. The network is modeled as an undirected graph of nodes and weighted links. We consider the spread of viruses in a network as a system, and the total number of infected nodes as the state of the system, while the control function is the weight reduction leading to slow/reduce spread of viruses. Our epidemic model overcomes three assumptions that were extensively used in the literature and produced inaccurate results. We apply the optimal control formulation to crucial network structures. Numerical results show the dynamical weight reduction and reveal the role of the network structure and the epidemic model in reducing the infection size in the presence of indiscernible infected nodes.
TH-E-BRF-01: Exploiting Tumor Shrinkage in Split-Course Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J; Craft, D; Hong, T
2014-06-15
Purpose: In split-course radiotherapy, a patient is treated in several stages separated by weeks or months. This regimen has been motivated by radiobiological considerations. However, using modern image-guidance, it also provides an approach to reduce normal tissue dose by exploiting tumor shrinkage. In this work, we consider the optimal design of split-course treatments, motivated by the clinical management of large liver tumors for which normal liver dose constraints prohibit the administration of an ablative radiation dose in a single treatment. Methods: We introduce a dynamic tumor model that incorporates three factors: radiation-induced cell kill, tumor shrinkage, and tumor cell repopulation. The design of split-course radiotherapy is formulated as a mathematical optimization problem in which the total dose to the liver is minimized, subject to delivering the prescribed dose to the tumor. Based on the model, we gain insight into the optimal administration of radiation over time, i.e. the optimal treatment gaps and dose levels. Results: We analyze treatments consisting of two stages in detail. The analysis confirms the intuition that the second stage should be delivered just before the tumor size reaches a minimum and repopulation overcompensates shrinking. Furthermore, it was found that, for a large range of model parameters, approximately one third of the dose should be delivered in the first stage. The projected benefit of split-course treatments in terms of liver sparing depends on model assumptions. However, the model predicts large liver dose reductions by more than a factor of two for plausible model parameters. Conclusion: The analysis of the tumor model suggests that substantial reduction in normal tissue dose can be achieved by exploiting tumor shrinkage via an optimal design of multi-stage treatments.
This suggests taking a fresh look at split-course radiotherapy for selected disease sites where substantial tumor regression translates into reduced target volumes.
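The two-stage trade-off can be made concrete with a toy model: stage-1 dose shrinks the tumor, so the stage-2 remainder irradiates a smaller volume and spares more liver, but giving everything in stage 2 forgoes nothing only if shrinkage were free. All parameters and the exponential shrinkage law below are illustrative assumptions, not the paper's calibrated model.

```python
import math

def liver_dose(d1, total=60.0, c=0.03, repop_per_day=0.0, gap=30):
    """Toy two-stage split-course model: normal-liver dose is proportional
    to the delivered dose times the relative tumor volume at delivery.
    Stage-1 dose d1 drives shrinkage; repopulation during the gap adds to
    the dose required in stage 2 (here off by default)."""
    d2 = total - d1 + repop_per_day * gap   # stage-2 dose to reach prescription
    v2 = math.exp(-c * d1)                  # relative volume after shrinkage
    return d1 * 1.0 + d2 * v2

def optimal_stage1_fraction(total=60.0):
    """Scan integer stage-1 doses and return the fraction minimizing
    the toy liver dose."""
    d1_best = min((liver_dose(d1, total), d1) for d1 in range(0, 61))[1]
    return d1_best / total
```

Even this crude model produces an interior optimum, a minority of the dose up front to trigger shrinkage, then the bulk delivered to the reduced volume, qualitatively matching the paper's "approximately one third in the first stage" finding.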
NASA Astrophysics Data System (ADS)
Majewski, Kurt
2018-03-01
Exact solutions of the Bloch equations with T1- and T2-relaxation terms for piecewise constant magnetic fields are numerically challenging. We therefore investigate an approximation for the achieved magnetization in which rotations and relaxations are split into separate operations. We develop an estimate for its accuracy and explicit first- and second-order derivatives with respect to the complex excitation radio frequency voltages. In practice, the deviation between an exact solution of the Bloch equations and this rotation-relaxation splitting approximation seems negligible. Its computation times are similar to those of exact solutions without relaxation terms. We apply the developed theory to numerically optimize radio frequency excitation waveforms with T1- and T2-relaxation in several examples.
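The splitting idea, apply an exact rotation for the field, then exact T1/T2 relaxation, is easy to show for the simplest case of a field along z. This is a minimal one-axis illustration of the operator-splitting step, not the paper's general treatment of arbitrary piecewise constant fields.

```python
import math

def bloch_split_step(m, dt, omega_z, t1, t2, m0=1.0):
    """One time step of rotation-relaxation splitting for the Bloch
    equations with a field along z: (1) exact rotation about z by
    omega_z*dt, (2) exact relaxation, where transverse magnetization
    decays with T2 and longitudinal magnetization recovers toward m0
    with T1. Both sub-steps are exact; only their composition is an
    approximation of the coupled dynamics."""
    mx, my, mz = m
    # sub-step 1: exact rotation about z
    c, s = math.cos(omega_z * dt), math.sin(omega_z * dt)
    mx, my = c * mx - s * my, s * mx + c * my
    # sub-step 2: exact relaxation
    e1, e2 = math.exp(-dt / t1), math.exp(-dt / t2)
    mx, my = mx * e2, my * e2
    mz = m0 + (mz - m0) * e1
    return (mx, my, mz)
```

Because each sub-step has a closed form, the cost per step is comparable to solving the relaxation-free problem, which is the paper's point about computation times.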
Operator splitting method for simulation of dynamic flows in natural gas pipeline networks
Dyachenko, Sergey A.; Zlotnik, Anatoly; Korotkevich, Alexander O.; ...
2017-09-19
Here, we develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjoint pipe. Our operator splitting numerical scheme is unconditionally stable and second-order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, also comparing its performance with that of two other state-of-the-art implicit schemes.
R. Y. Chen; Gu, G. D.; Chen, Z. G.; ...
2015-10-22
We present a magnetoinfrared spectroscopy study on a newly identified three-dimensional (3D) Dirac semimetal, ZrTe5. We observe clear transitions between Landau levels and their further splitting under a magnetic field. Both the sequence of transitions and their field dependence follow quantitatively the relation expected for 3D massless Dirac fermions. The measurement also reveals an exceptionally low magnetic field needed to drive the compound into its quantum limit, demonstrating that ZrTe5 is an extremely clean system and an ideal platform for studying 3D Dirac fermions. The splitting of the Landau levels provides direct, bulk spectroscopic evidence that a relatively weak magnetic field can produce a sizable Zeeman effect on the 3D Dirac fermions, which lifts the spin degeneracy of Landau levels. As a result, our analysis indicates that the compound evolves from a Dirac semimetal into a topological line-node semimetal under the current magnetic field configuration.
NASA Astrophysics Data System (ADS)
Hooda, Nikhil; Damani, Om
2017-06-01
The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also consist of intermediate tanks that act as buffers between incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is manually made in an ad hoc fashion before any optimization is done. It is desirable therefore to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration to the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
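The diameter-selection core of the problem can be shown with an exhaustive sketch over a hypothetical pipe catalogue: pick one commercial diameter per pipe so total head loss along a single source-to-demand path stays within budget, at minimum cost. The catalogue prices, the Hazen-Williams roughness coefficient, and the single-path simplification are all illustrative assumptions; the paper's integer linear program additionally chooses segment splits and tank configurations.

```python
import itertools

# Hypothetical catalogue: diameter (m) -> cost per metre (currency units)
CATALOGUE = {0.10: 50, 0.15: 90, 0.20: 150, 0.25: 230}

def head_loss(length, diameter, flow, k=10.67, c_hw=130):
    """Hazen-Williams head loss (SI form) for one pipe of uniform diameter."""
    return k * length * (flow / c_hw) ** 1.852 / diameter ** 4.87

def cheapest_design(pipes, max_loss):
    """Exhaustive search over catalogue diameters for each pipe on one
    source-to-demand path; returns (cost, diameters) of the cheapest
    feasible design, or None if the head-loss budget cannot be met."""
    best = None
    for choice in itertools.product(CATALOGUE, repeat=len(pipes)):
        loss = sum(head_loss(p["length"], d, p["flow"])
                   for p, d in zip(pipes, choice))
        if loss > max_loss:
            continue
        cost = sum(CATALOGUE[d] * p["length"] for p, d in zip(pipes, choice))
        if best is None or cost < best[0]:
            best = (cost, choice)
    return best
```

Enumeration is exponential in the number of pipes, which is exactly why the paper formulates the problem, including the tank-configuration choice, as an integer linear program instead.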
Optimal synchronization in space
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-02-01
In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^-α, with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
Koopman, Daniëlle; van Dalen, Jorn A; Arkies, Hester; Oostdijk, Ad H J; Francken, Anne Brecht; Bart, Jos; Slump, Cornelis H; Knollema, Siert; Jager, Pieter L
2018-01-16
We evaluated the diagnostic implications of a small-voxel reconstruction for lymph node characterization in breast cancer patients, using state-of-the-art FDG-PET/CT. We included 69 FDG-PET/CT scans from breast cancer patients. PET data were reconstructed using standard 4 × 4 × 4 mm³ and small 2 × 2 × 2 mm³ voxels. Two hundred thirty loco-regional lymph nodes were included, of which 209 nodes were visualised on PET/CT. All nodes were visually scored as benign or malignant, and SUVmax and TB ratio (= SUVmax/SUVbackground) were measured. Final diagnosis was based on histological or imaging information. We determined the accuracy, sensitivity and specificity for both reconstruction methods and calculated optimal cut-off values to distinguish benign from malignant nodes. Sixty-one benign and 169 malignant lymph nodes were included. Visual evaluation accuracy was 73% (sensitivity 67%, specificity 89%) on standard-voxel images and 77% (sensitivity 78%, specificity 74%) on small-voxel images (p = 0.13). Across malignant nodes visualised on PET/CT, the small-voxel score was more often correct compared with the standard-voxel score (89 vs. 76%, p < 0.001). In benign nodes, the standard-voxel score was more often correct (89 vs. 74%, p = 0.04). Quantitative data were based on the 61 benign and 148 malignant lymph nodes visualised on PET/CT. SUVs and TB ratios were on average 3.0 and 1.6 times higher in malignant nodes compared to those in benign nodes (p < 0.001), on standard- and small-voxel PET images respectively. Small-voxel PET showed average increases in SUVmax and TB ratio of typically 40% over standard-voxel PET. The optimal SUVmax cut-off using standard voxels was 1.8 (sensitivity 81%, specificity 95%, accuracy 85%), while for small voxels the optimal SUVmax cut-off was 2.6 (sensitivity 78%, specificity 98%, accuracy 84%). Differences in accuracy were non-significant.
Small-voxel PET/CT improves the sensitivity of visual lymph node characterization and provides a higher detection rate of malignant lymph nodes. However, small-voxel PET/CT also introduced more false-positive results in benign nodes. Across all nodes, differences in accuracy were non-significant. Quantitatively, small-voxel images require higher cut-off values. Readers have to adapt their reference standards.
An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments
Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L.; de Carvalho, Carlos Giovanni N.; Mendes, Douglas Lopes de S.; Costa, Valney da Gama
2018-01-01
Virtual sensors provisioning is a central issue for sensor cloud middleware, since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSNs) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on correlated measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors) and are therefore not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the measurement correlations rather than the physical distance between nodes, as in most works in the literature. The second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users' queries while satisfying all query parameters and preserving overall energy consumption. Results from initial experiments show that the approach significantly reduces sensor cloud energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios. PMID:29495406
Moving target tracking through distributed clustering in directional sensor networks.
Enayet, Asma; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif
2014-12-18
The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works.
Network reconstruction via graph blending
NASA Astrophysics Data System (ADS)
Estrada, Rolando
2016-05-01
Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.
Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks
Jiang, Peng; Wang, Xingmin; Jiang, Lurong
2015-01-01
Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in water. One of the typical algorithms used in such a process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between two neighboring nodes, and thus achieves good performance. However, the connectivity performance of SDDA is irresolute. In this paper, we propose a depth adjustment algorithm based on connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree, so that the network can be organized as a forest to maintain network connectivity. Coverage overlaps between the parent node and the child node are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between the parent node and the child node to reduce node movement, and a silent mode is adopted to reduce communication cost. Simulations show that compared with SDDA, CTDA can achieve high connectivity with various communication ranges and different numbers of nodes. Moreover, it can realize coverage as high as that of SDDA with various sensing ranges and numbers of nodes but with less energy consumption. Simulations under sparse environments show that the connectivity and energy consumption performances of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage performances of CTDA are close to those of the depth adjustment algorithm based on a connected dominating set (CDA), which is similar to CTDA. However, the energy consumption of CTDA is less than that of CDA, particularly in sparse underwater environments. PMID:26184209
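The connectivity guarantee of a connected-tree construction comes from its growth rule: starting at the sink, a node joins only as the child of a node already attached, so every tree member has a path to the sink by construction. The BFS sketch below illustrates that rule only; CTDA's depth adjustment, coverage optimization, and silent mode are not modelled.

```python
from collections import deque

def build_connected_tree(comm_links, sink):
    """Grow a tree rooted at the sink over the communication graph
    (adjacency dict {node: [neighbours]}): any node that can hear an
    already-attached node joins as its child. Returns a parent map;
    nodes absent from the map are unreachable from the sink."""
    parent = {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in comm_links.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent
```

In a sparse deployment some nodes may remain outside the map, which is where depth adjustment comes in: moving a stranded node vertically until it can hear an attached node lets it join the forest.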
Shah, Peer Azmat; Hasbullah, Halabi B; Lawal, Ibrahim A; Aminu Mu'azu, Abubakar; Tang Jung, Low
2014-01-01
Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of a mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses a shared secret Token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintenance of the correspondent node's compatibility status. TOTP-RO was implemented in the network simulator NS-2 and an analytical evaluation was also performed. Analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6 Return-Routability-based Route Optimization (RR-RO).
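The TOTP building block that TOTP-RO relies on can be sketched with Python's standard library. This follows the generic RFC 6238/RFC 4226 construction (HMAC-SHA1 over a time-window counter with dynamic truncation), not the paper's exact message flow; the shared secret below is illustrative.

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, t: float = None) -> str:
    """RFC 6238-style time-based one-time password (HMAC-SHA1)."""
    if t is None:
        t = time.time()
    counter = int(t) // timestep                       # time-window index
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 s, 8 digits
assert totp(b"12345678901234567890", digits=8, t=59) == "94287082"

# Mobile node and correspondent node sharing `secret` derive the same
# short-lived token within one time window, so reachability can be checked
# without the full home-test/care-of-test exchange.
shared = b"mn-cn-shared-token"
assert totp(shared, t=1000) == totp(shared, t=1010)    # same 30 s window
```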
28nm node process optimization: a lithography centric view
NASA Astrophysics Data System (ADS)
Seltmann, Rolf
2014-10-01
Many experts claim that the 28nm technology node will be the most cost-effective technology node ever. This results primarily from the cost of manufacturing, since 28nm is the last true Single Patterning (SP) node, and is compounded by the dramatic increase of design costs and the limited shrink factor of the following nodes. Thus, it is assumed that this technology will remain alive for many years. To be cost competitive, high yields are mandatory. Meanwhile, leading edge foundries have optimized the yield of the 28nm node to such a level that it is nearly exclusively defined by random defectivity. However, it was a long way to come to that level. In my talk I will concentrate on the contribution of lithography to this yield learning curve, choosing a critical metal patterning application. I will show what was needed to optimize the process window to a level beyond the usual OPC model work that was common on previous nodes. Reducing the process (in particular focus) variability is a complementary need. It will be shown which improvements were needed in tooling, process control and design-mask-wafer interaction to remove all systematic yield detractors. Over the last couple of years new scanner platforms were introduced that were targeted for both better productivity and better parametric performance. But this was not a clear run-path; it needed some extra efforts by the tool suppliers together with the fab to bring the tool variability down to the necessary level. Another important topic in reducing variability is the interaction of wafer non-planarity and lithography optimization. Having accurate knowledge of within-die topography is essential for optimum patterning. By completing both the variability reduction work and the process window enhancement work we were able to transfer the original marginal process budget into a robust positive budget, thus ensuring high yield and low costs.
Irradiation of the prostate and pelvic lymph nodes with an adaptive algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, A. B.; Chen, J.; Nguyen, T. B.
2012-02-15
Purpose: The simultaneous treatment of pelvic lymph nodes and the prostate in radiotherapy for prostate cancer is complicated by the independent motion of these two target volumes. In this work, the authors study a method to adapt intensity modulated radiation therapy (IMRT) treatment plans so as to compensate for this motion by adaptively morphing the multileaf collimator apertures and adjusting the segment weights. Methods: The study used CT images, tumor volumes, and normal tissue contours from patients treated in our institution. An IMRT treatment plan was then created using direct aperture optimization to deliver 45 Gy to the pelvic lymph nodes and 50 Gy to the prostate and seminal vesicles. The prostate target volume was then shifted in either the anterior-posterior direction or in the superior-inferior direction. The treatment plan was adapted by adjusting the aperture shapes with or without re-optimizing the segment weighting. The dose to the target volumes was then determined for the adapted plan. Results: Without compensation for prostate motion, 1 cm shifts of the prostate resulted in an average decrease of 14% in D-95%. If the isocenter is simply shifted to match the prostate motion, the prostate receives the correct dose but the pelvic lymph nodes are underdosed by 14% ± 6%. The use of adaptive morphing (with or without segment weight optimization) reduces the average change in D-95% to less than 5% for both the pelvic lymph nodes and the prostate. Conclusions: Adaptive morphing with and without segment weight optimization can be used to compensate for the independent motion of the prostate and lymph nodes when combined with daily imaging or other methods to track the prostate motion. This method allows the delivery of the correct dose to both the prostate and lymph nodes with only small changes to the dose delivered to the target volumes.
Optimization of robustness of interdependent network controllability by redundant design
2018-01-01
Controllability of complex networks has been a hot topic in recent years. Real networks are often coupled together as interdependent networks. The cascading process of interdependent networks, including interdependent failure and overload failure, degrades the robustness of controllability for the whole network. Therefore, optimizing the robustness of interdependent network controllability is of great importance in complex network research. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum number of driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed decrease the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Overall, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426
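The minimum number of driver nodes used above as the controllability measure can be computed from a maximum matching of the directed network, following the standard structural-controllability result (unmatched nodes must be driven). A pure-Python sketch with invented example graphs:

```python
def max_matching(nodes, edges):
    """Maximum bipartite matching via augmenting paths on the bipartite
    representation of a directed network: edge u->v links u (left) to v
    (right)."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}                                # right node -> left node
    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    for u in nodes:
        augment(u, set())
    return match

def min_driver_nodes(nodes, edges):
    # Nodes that are not the head of any matching edge must be driven;
    # a perfectly matched network still needs one driver.
    return max(len(nodes) - len(max_matching(nodes, edges)), 1)

# Star 1 -> {2,3,4}: the hub can steer only one leaf at a time -> 3 drivers.
assert min_driver_nodes([1, 2, 3, 4], [(1, 2), (1, 3), (1, 4)]) == 3
# Directed path 1 -> 2 -> 3 -> 4: fully matched -> a single driver suffices.
assert min_driver_nodes([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]) == 1
```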
Distributed Channel Allocation and Time Slot Optimization for Green Internet of Things.
Ding, Kaiqi; Zhao, Haitao; Hu, Xiping; Wei, Jibo
2017-10-28
In sustainable smart cities, power saving is a severe challenge in the energy-constrained Internet of Things (IoT). Efficient utilization of the limited non-overlapping channels and time resources is a promising way to reduce network interference and save energy. In this paper, we propose a joint channel allocation and time slot optimization solution for IoT. First, we propose a channel ranking algorithm which enables each node to rank its available channels based on the channel properties. Then, we propose a distributed channel allocation algorithm so that each node can choose a proper channel based on the channel ranking and its own residual energy. Finally, the sleeping duration and spectrum sensing duration are jointly optimized to maximize the normalized throughput while satisfying energy consumption constraints. Different from former approaches, our proposed solution requires no central coordination or global information; each node operates based on its own local information in a fully distributed manner. Theoretical analysis and extensive simulations have validated that when applying our solution in an IoT network: (i) each node can be allocated to a proper channel based on its residual energy to balance the network lifetime; (ii) the network can rapidly converge to collision-free transmission through each node's learning ability in the process of distributed channel allocation; and (iii) the network throughput is further improved via dynamic time slot optimization.
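The two distributed steps (rank channels locally, then pick one according to residual energy) can be sketched as follows. The scoring weights and the energy-based back-off rule here are illustrative choices, not the paper's exact metrics.

```python
def rank_channels(stats):
    """Rank available channels best-first: higher idle probability wins,
    ties broken by lower measured interference. `stats` maps channel id ->
    (idle_prob, interference_dBm)."""
    return sorted(stats, key=lambda c: (-stats[c][0], stats[c][1]))

def choose_channel(ranking, residual_energy, full_energy):
    """Energy-aware choice: an energy-rich node takes a top-ranked channel,
    a depleted node backs off to a lower-ranked one to balance lifetime."""
    idx = min(int((1 - residual_energy / full_energy) * len(ranking)),
              len(ranking) - 1)
    return ranking[idx]

stats = {1: (0.9, -90), 2: (0.9, -80), 3: (0.5, -95)}   # hypothetical scans
order = rank_channels(stats)
assert order == [1, 2, 3]
assert choose_channel(order, 10.0, 10.0) == 1   # full battery -> top channel
assert choose_channel(order, 1.0, 10.0) == 3    # nearly empty -> yields
```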
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, Victor, E-mail: vhernandezmasgrau@gmail.com; Arenas, Meritxell; Müller, Katrin
2013-01-01
To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical analysis comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques originated excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.
A New Approach to Design Autonomous Wireless Sensor Node Based on RF Energy Harvesting System.
Mouapi, Alex; Hakem, Nadir
2018-01-05
Energy Harvesting techniques are increasingly seen as the solution for freeing wireless sensor nodes from their battery dependency. However, it remains evident that network performance features, such as network size, packet length, and duty cycle, are influenced by the amount of recovered energy. This paper proposes a new approach to defining the specifications of a stand-alone wireless node based on a Radio-frequency Energy Harvesting System (REHS). To achieve adequate performance regarding the range of the Wireless Sensor Network (WSN), techniques for minimizing the energy consumed by the sensor node are combined with methods for optimizing the performance of the REHS. For more rigor in the design of the autonomous node, a comprehensive energy model of the node in a wireless network is established. For an equitable distribution of the network load among its nodes, the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol is used. The model considers five energy-consumption sources, most of which are ignored in recently used models. By using the hardware parameters of commercial off-the-shelf components (Mica2 Motes and CC2520 of Texas Instruments), the energy requirement of a sensor node is quantified. A miniature REHS based on a judicious choice of rectifying diodes is then designed and developed to achieve optimal performance in the Industrial, Scientific and Medical (ISM) band centered at 2.45 GHz. Due to the mismatch between the REHS and the antenna, a band-pass filter is designed to reduce reflection losses. A gradient search method is used to optimize the output characteristics of the adapted REHS.
At 1 mW of input RF power, the REHS provides an output DC power of 0.57 mW, and a comparison with the energy requirement of the node allows the Base Station (BS) to be located 310 m from the wireless nodes when the Wireless Sensor Network (WSN) has 100 nodes evenly spread over an area of 300 × 300 m² and each round lasts 10 min. The results show that the range of the autonomous WSN increases when the monitored physical phenomenon varies very slowly. Having taken into account all the dissipation sources coexisting in a sensor node and using actual measurements of an REHS, this work provides guidelines for the design of autonomous nodes based on REHS.
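The budget check behind such a range figure can be sketched with a first-order radio energy model of the kind used in LEACH analyses. The constants below are textbook illustrative values, not the paper's measured Mica2/CC2520 parameters, so the result only agrees with the reported 310 m in order of magnitude.

```python
# First-order radio model: E_tx = k * (E_elec + eps_mp * d^4) (multipath)
E_ELEC = 50e-9        # J/bit, transmit electronics (illustrative)
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier (illustrative)

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d metres."""
    return bits * (E_ELEC + EPS_MP * d ** 4)

def max_range(bits, round_s, p_harvest_w):
    """Largest BS distance the harvested power can sustain, assuming one
    `bits`-long report per round and all harvested energy spent on TX."""
    budget = p_harvest_w * round_s            # joules available per round
    d = 0.0
    while tx_energy(bits, d + 1) <= budget:
        d += 1
    return d

# 0.57 mW harvested DC power, 10-minute rounds, 2000-bit packets
r = max_range(2000, 600, 0.57e-3)
assert r > 300    # consistent in order of magnitude with the reported 310 m
```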
Optimal energy-splitting method for an open-loop liquid crystal adaptive optics system.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Liu, Yonggang; Peng, Zenghui; Yang, Qingyun; Meng, Haoran; Yao, Lishuang; Xuan, Li
2012-08-13
A waveband-splitting method is proposed for open-loop liquid crystal adaptive optics systems (LC AOSs). The proposed method extends the working waveband, splits energy flexibly, and improves detection capability. Simulated analysis is performed for a waveband in the range of 350 nm to 950 nm. The results show that the optimal energy split is 7:3 for the wavefront sensor (WFS) and for the imaging camera with the waveband split into 350 nm to 700 nm and 700 nm to 950 nm, respectively. A validation experiment is conducted by measuring the signal-to-noise ratio (SNR) of the WFS and the imaging camera. The results indicate that for the waveband-splitting method, the SNR of WFS is approximately equal to that of the imaging camera with a variation in the intensity. On the other hand, the SNR of the WFS is significantly different from that of the imaging camera for the polarized beam splitter energy splitting scheme. Therefore, the waveband-splitting method is more suitable for an open-loop LC AOS. An adaptive correction experiment is also performed on a 1.2-meter telescope. A star with a visual magnitude of 4.45 is observed and corrected and an angular resolution ability of 0.31″ is achieved. A double star with a combined visual magnitude of 4.3 is observed as well, and its two components are resolved after correction. The results indicate that the proposed method can significantly improve the detection capability of an open-loop LC AOS.
Penile Cancer: Contemporary Lymph Node Management.
O'Brien, Jonathan S; Perera, Marlon; Manning, Todd; Bozin, Mike; Cabarkapa, Sonja; Chen, Emily; Lawrentschuk, Nathan
2017-06-01
In penile cancer, the optimal diagnostics and management of metastatic lymph nodes are not clear. Advances in minimally invasive staging, including dynamic sentinel lymph node biopsy, have widened the diagnostic repertoire of the urologist. We aimed to provide an objective update of the recent trends in the management of penile squamous cell carcinoma, and inguinal and pelvic lymph node metastases. We systematically reviewed several medical databases, including the Web of Science® (with MEDLINE®), Embase® and Cochrane databases, according to PRISMA (Preferred Reporting Items for Systematic Review and Meta-Analyses) guidelines. The search terms used were penile cancer, lymph node, sentinel node, minimally invasive, surgery and outcomes, alone and in combination. Articles pertaining to the management of lymph nodes in penile cancer were reviewed, including original research, reviews and clinical guidelines published between 1980 and 2016. Accurate and minimally invasive lymph node staging is of the utmost importance in the surgical management of penile squamous cell carcinoma. In patients with clinically node negative disease, a growing body of evidence supports the use of sentinel lymph node biopsies. Dynamic sentinel lymph node biopsy exposes the patient to minimal risk, and results in superior sensitivity and specificity profiles compared to alternate nodal staging techniques. In the presence of locoregional disease, improvements in inguinal or pelvic lymphadenectomy have reduced morbidity and improved oncologic outcomes. A multimodal approach of chemotherapy and surgery has demonstrated a survival benefit for patients with advanced disease. Recent developments in lymph node management have occurred in penile cancer, such as minimally invasive lymph node diagnosis and intervention strategies. These advances have been met with a degree of controversy in the contemporary literature. 
Current data suggest that dynamic sentinel lymph node biopsy provides excellent sensitivity and specificity for detecting lymph node metastases. More robust long-term data on multicenter patient cohorts are required to determine the optimal management of lymph nodes in penile cancer. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Design technology co-optimization for 14/10nm metal1 double patterning layer
NASA Astrophysics Data System (ADS)
Duan, Yingli; Su, Xiaojing; Chen, Ying; Su, Yajuan; Shao, Feng; Zhang, Recco; Lei, Junjiang; Wei, Yayi
2016-03-01
Design and technology co-optimization (DTCO) can satisfy the needs of the design, generate robust design rules, and avoid unfriendly patterns at the early stage of design, ensuring a high level of manufacturability of the product within the technical capability of the present process. The DTCO methodology in this paper mainly includes design rule translation, layout analysis, model validation, hotspot classification and design rule optimization. Combining DTCO with double patterning (DPT) can optimize the related design rules and generate friendlier layouts that meet the requirements of the 14/10nm technology node. The experiment demonstrates the methodology of DPT-compliant DTCO applied to a metal1 layer of the 14/10nm node. The DTCO workflow proposed in our work is an efficient solution for optimizing the design rules for the 14/10nm node metal1 layer. The paper also discusses and verifies how to tune the design rules for U-shape and L-shape structures in a DPT-aware metal layer.
Wang, Xue; Wang, Sheng; Ma, Jun-Jie
2007-01-01
The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes hinders the movement of mobile sensor nodes. Particle swarm optimization (PSO) has been introduced as another dynamic deployment algorithm, but in this case the required computation time is the big bottleneck. This paper proposes a dynamic deployment algorithm named "virtual force directed co-evolutionary particle swarm optimization" (VFCPSO), which combines co-evolutionary particle swarm optimization (CPSO) with the VF algorithm: the CPSO uses multiple swarms to cooperatively optimize different components of the solution vectors for dynamic deployment, and the velocity of each particle is updated according to not only the historical local and global optimal solutions but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
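The hybrid velocity update is the core idea: the usual PSO inertia/cognitive/social terms plus a virtual-force term. A 1-D toy sketch (the force rule, coefficients, and threshold are invented for illustration; the real algorithm works on 2-D node coordinates with co-evolving sub-swarms):

```python
import random

def virtual_force(pos, others, r_th=5.0):
    """Net 1-D virtual force on a node: neighbours closer than r_th repel,
    farther ones attract (a scalar toy version of the VF rule)."""
    f = 0.0
    for o in others:
        d = o - pos
        if abs(d) > 1e-9:
            f += d * (0.1 if abs(d) > r_th else -0.1)
    return f

def vfcpso_step(xs, vs, pbest, gbest, others,
                w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """One VFCPSO-style update: standard PSO terms plus a virtual-force
    term weighted by c3."""
    for i in range(len(xs)):
        vs[i] = (w * vs[i]
                 + c1 * random.random() * (pbest[i] - xs[i])
                 + c2 * random.random() * (gbest - xs[i])
                 + c3 * virtual_force(xs[i], others))
        xs[i] += vs[i]
    return xs, vs

random.seed(0)
xs, vs = vfcpso_step([0.0], [0.0], pbest=[0.0], gbest=10.0, others=[])
assert xs[0] > 0    # particle moves toward the global best
```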
Fog computing job scheduling optimization based on bees swarm
NASA Astrophysics Data System (ADS)
Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid
2018-04-01
Fog computing is a new computing architecture composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing significant amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and genetic algorithms in terms of CPU execution time and allocated memory.
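A bee-colony-style search for a task-to-fog-node assignment can be sketched as repeated exploratory perturbations that are kept only when they improve the objective. This toy minimizes makespan only; the paper's BLA also weighs allocated memory, and the task costs and node speeds here are invented.

```python
import random

def bees_life_schedule(tasks, nodes, iters=200, seed=3):
    """Toy stochastic search for assigning task costs to fog nodes with
    given speeds, minimizing the makespan (max node finish time)."""
    rng = random.Random(seed)
    def makespan(assign):
        load = [0.0] * len(nodes)
        for t, n in zip(tasks, assign):
            load[n] += t / nodes[n]           # task cost / node speed
        return max(load)
    best = [rng.randrange(len(nodes)) for _ in tasks]
    for _ in range(iters):
        cand = best[:]                        # scout bee: move one task
        cand[rng.randrange(len(tasks))] = rng.randrange(len(nodes))
        if makespan(cand) < makespan(best):   # keep only improvements
            best = cand
    return best, makespan(best)

tasks = [4, 3, 2, 2, 1]        # hypothetical task costs
nodes = [1.0, 2.0]             # hypothetical node speeds
assign, ms = bees_life_schedule(tasks, nodes)
assert ms <= 4.5               # far better than the 12.0 of all-on-slow-node
```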
Design and optimization of color lookup tables on a simplex topology.
Monga, Vishal; Bala, Raja; Mo, Xuan
2012-04-01
An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
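The n-simplex interpolation the design optimizes reduces, for n = 2, to barycentric interpolation over a triangle of LUT nodes. A minimal sketch (the triangle and test transform are invented; the paper's LUTs live in higher-dimensional color spaces):

```python
def barycentric_weights(p, tri):
    """Barycentric coordinates of 2-D point p in triangle tri = (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = tri
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (p[0] - cx) + (cx - bx) * (p[1] - cy)) / det
    w2 = ((cy - ay) * (p[0] - cx) + (ax - cx) * (p[1] - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def simplex_interpolate(p, tri, node_values):
    """LUT output at p: barycentric blend of the three node outputs."""
    w = barycentric_weights(p, tri)
    return sum(wi * v for wi, v in zip(w, node_values))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
# any linear transform is reproduced exactly by simplex interpolation
f = lambda x, y: 2 * x + 3 * y + 1
vals = [f(*v) for v in tri]
assert abs(simplex_interpolate((0.25, 0.25), tri, vals) - f(0.25, 0.25)) < 1e-12
```

For a nonlinear transform the residual after this step is exactly the interpolation error that joint optimization of node locations and node values tries to minimize.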
On the Design of Smart Parking Networks in the Smart Cities: An Optimal Sensor Placement Model
Bagula, Antoine; Castelli, Lorenzo; Zennaro, Marco
2015-01-01
Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called “anchor” nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. 
These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of energy consumption, delay, and throughput achieved by a smart parking network. PMID:26134104
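The anchor-placement objective (fewest repeater nodes covering every spot) can be illustrated with an exhaustive stand-in for the paper's integer linear program. This brute force is only viable at toy sizes; at scale a MIP solver (the paper uses Mosel/Xpress) is needed, and the coverage sets below are invented.

```python
from itertools import combinations

def optimal_anchor_placement(spots, candidates, covers):
    """Smallest set of candidate anchor positions whose combined coverage
    includes every parking spot; covers[c] is the set of spots candidate c
    reaches. Exhaustive search over subsets, smallest first."""
    for k in range(1, len(candidates) + 1):
        for combo in combinations(candidates, k):
            if set().union(*(covers[c] for c in combo)) >= set(spots):
                return list(combo)
    return None                      # no feasible placement

spots = [1, 2, 3, 4]
covers = {"A": {1, 2}, "B": {3, 4}, "C": {2, 3}}
assert sorted(optimal_anchor_placement(spots, ["A", "B", "C"], covers)) == ["A", "B"]
```

The real formulation is multi-objective (coverage, lifetime, and deployment cost); this sketch keeps only the minimum-cardinality coverage constraint.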
Genetic design of enhanced valley splitting towards a spin qubit in silicon
Zhang, Lijun; Luo, Jun-Wei; Saraiva, Andre; Koiller, Belita; Zunger, Alex
2013-01-01
The long spin coherence time and microelectronics compatibility of Si makes it an attractive material for realizing solid-state qubits. Unfortunately, the orbital (valley) degeneracy of the conduction band of bulk Si makes it difficult to isolate individual two-level spin-1/2 states, limiting their development. This degeneracy is lifted within Si quantum wells clad between Ge-Si alloy barrier layers, but the magnitude of the valley splittings achieved so far is small—of the order of 1 meV or less—degrading the fidelity of information stored within such a qubit. Here we combine an atomistic pseudopotential theory with a genetic search algorithm to optimize the structure of layered-Ge/Si-clad Si quantum wells to improve this splitting. We identify an optimal sequence of multiple Ge/Si barrier layers that more effectively isolates the electron ground state of a Si quantum well and increases the valley splitting by an order of magnitude, to ∼9 meV. PMID:24013452
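The genetic search over layered barrier sequences can be sketched with a simple elitist GA over bit strings (0 = Si layer, 1 = Ge layer). The fitness here is a toy stand-in that rewards alternating layers; the real objective requires the atomistic pseudopotential solver.

```python
import random

def fitness(seq):
    """Toy surrogate for valley splitting: count adjacent differing layers
    (max 9 for a 10-layer stack); NOT the physical objective."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def genetic_search(n_layers=10, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_layers)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]                     # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_layers)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.1:                # point mutation
                i = rng.randrange(n_layers)
                child[i] ^= 1
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

best = genetic_search()
assert fitness(best) >= 8     # near-optimal alternating barrier sequence
```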
NASA Technical Reports Server (NTRS)
Hotchkiss, G. B.; Burmeister, L. C.; Bishop, K. A.
1980-01-01
A discrete-gradient optimization algorithm is used to identify the parameters in a one-node and a two-node capacitance model of a flat-plate collector. Collector parameters are first obtained by a linear-least-squares fit to steady state data. These parameters, together with the collector heat capacitances, are then determined from unsteady data by use of the discrete-gradient optimization algorithm with less than 10 percent deviation from the steady state determination. All data were obtained in the indoor solar simulator at the NASA Lewis Research Center.
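The first step described (a linear least-squares fit to steady-state data) can be sketched in closed form for the standard flat-plate efficiency line eta = a - b*x with x = (T_in - T_ambient)/G, where a approximates F_R(tau*alpha) and b approximates F_R*U_L. The data below are synthetic, not the NASA Lewis simulator measurements.

```python
def fit_collector(xs, etas):
    """Closed-form least-squares fit of eta = a - b*x (steady-state
    flat-plate collector model)."""
    n = len(xs)
    mx = sum(xs) / n
    me = sum(etas) / n
    b = -sum((x - mx) * (e - me) for x, e in zip(xs, etas)) / \
        sum((x - mx) ** 2 for x in xs)
    a = me + b * mx                 # intercept from the fitted slope
    return a, b

xs = [0.01, 0.02, 0.03, 0.04]               # (T_in - T_amb)/G samples
etas = [0.75 - 5.0 * x for x in xs]         # noiseless synthetic collector
a, b = fit_collector(xs, etas)
assert abs(a - 0.75) < 1e-9 and abs(b - 5.0) < 1e-9
```

The capacitance parameters of the one-node and two-node models would then be identified from unsteady data on top of these steady-state coefficients, which is where the discrete-gradient search enters.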
Spectral splitting for thermal management in photovoltaic cells
NASA Astrophysics Data System (ADS)
Apostoleris, Harry; Chiou, Yu-Cheng; Chiesa, Matteo; Almansouri, Ibraheem
2017-09-01
Spectral splitting is widely employed as a way to divide light between different solar cells or processes to optimize energy conversion. Well-understood but less explored is the use of spectrum splitting or filtering to combat solar cell heating. This has impacts both on cell performance and on the surrounding environment. In this manuscript we explore the design of spectral filtering systems that can improve the thermal and power-conversion performance of commercial PV modules.
A connectionist model for diagnostic problem solving
NASA Technical Reports Server (NTRS)
Peng, Yun; Reggia, James A.
1989-01-01
A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.
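The competition dynamic (disorder nodes dividing up the support of each manifestation they explain) can be sketched with a heavily simplified activation-update rule. The normalization and learning rate here are invented; the model's actual local update criteria are derived from its global optimization objective.

```python
def settle(causal, present, iters=200, lr=0.2):
    """Simplified competitive activation updating: each disorder gains
    support proportional to its share of each present manifestation, so
    disorders explaining the same finding compete."""
    acts = {d: 0.5 for d in causal}               # initial activations
    for _ in range(iters):
        new = {}
        for d, links in causal.items():
            support = 0.0
            for m, strength in links.items():
                if m in present:
                    # d's share of manifestation m, against all competitors
                    total = sum(causal[e].get(m, 0) * acts[e] for e in causal)
                    support += strength * acts[d] / (total + 1e-9)
            new[d] = min(1.0, (1 - lr) * acts[d]
                         + lr * support / max(len(present), 1))
        acts = new
    return acts

# "flu" strongly causes fever and cough; "rare" weakly causes fever only
causal = {"flu": {"fever": 0.9, "cough": 0.9},
          "rare": {"fever": 0.3}}
acts = settle(causal, {"fever", "cough"})
assert acts["flu"] > acts["rare"]    # the better explanation wins
```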
Design and optimization of cascaded DCG based holographic elements for spectrum-splitting PV systems
NASA Astrophysics Data System (ADS)
Wu, Yuechen; Chrysler, Benjamin; Pelaez, Silvana Ayala; Kostuk, Raymond K.
2017-09-01
In this work, the technique of designing and optimizing broadband volume transmission holograms using dichromated gelatin (DCG) is summarized for solar spectrum-splitting applications. A spectrum-splitting photovoltaic system uses a series of single-bandgap PV cells with different spectral conversion efficiency properties to more fully utilize the solar spectrum. In such a system, one or more high-performance optical filters are usually required to split the solar spectrum and efficiently direct each band to the corresponding PV cell. An ideal spectral filter should have a rectangular shape with sharp transition wavelengths. DCG is a near-ideal holographic material for solar applications, as it can achieve high refractive index modulation, low absorption and scattering, and long-term stability to solar exposure after sealing. In this research, a methodology for designing and modeling a transmission DCG hologram using coupled wave analysis for different PV bandgap combinations is described. To achieve a broad diffraction bandwidth and a sharp cut-off wavelength, a cascaded structure of multiple thick holograms is described. A search algorithm is also developed to optimize both single-layer and two-layer cascaded holographic spectrum splitters for the best bandgap combinations of two- and three-junction SSPV systems illuminated under the AM1.5 solar spectrum. The power conversion efficiencies of the optimized systems under the AM1.5 solar spectrum are then calculated using the detailed balance method and show an improvement compared with the tandem structure.
Underwater Sensor Network Redeployment Algorithm Based on Wolf Search
Jiang, Peng; Feng, Yang; Wu, Feng
2016-01-01
This study addresses the optimization of node redeployment coverage in underwater wireless sensor networks. Given that nodes easily fail in harsh underwater environments and that underwater wireless sensor networks are deployed on a large scale, an underwater sensor network redeployment algorithm was developed based on wolf search. This study applies the wolf search algorithm, combined with crowding-degree control, to the deployment of underwater wireless sensor networks. The proposed algorithm uses the nodes to ensure coverage of events while avoiding premature convergence, and it achieves good coverage. In addition, considering that obstacles exist in the underwater environment, nodes are prevented from failing by imitating the mechanism of avoiding predators, and the energy consumption of the network is thus reduced. Comparative analysis shows that the algorithm is simple and effective for wireless sensor network deployment. Compared with the optimized artificial fish swarm algorithm, the proposed algorithm exhibits advantages in network coverage, energy conservation, and obstacle avoidance. PMID:27775659
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother; since the grid is unstructured, a Jacobi type is chosen, and the multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but with different resolutions are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
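The defect-correction structure of the multigrid cycle can be illustrated on a much simpler problem. The sketch below runs a two-grid cycle (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation) for a 1-D Poisson equation on a structured grid; it mirrors only the smoothing/coarse-correction pattern, not the paper's unstructured flux-difference solver.

```python
import numpy as np

def poisson(n):
    # 1-D Poisson operator on n interior points, Dirichlet boundaries
    h = 1.0 / (n + 1)
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, b, x, sweeps=3, omega=0.8):
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def restrict(r):
    # full weighting; coarse point j sits at fine index 2j+1
    return 0.25*r[0:-2:2] + 0.5*r[1:-1:2] + 0.25*r[2::2]

def prolong(ec, nf):
    ef = np.zeros(nf)
    ef[1::2] = ec
    ef[2:-1:2] = 0.5*(ec[:-1] + ec[1:])
    ef[0], ef[-1] = 0.5*ec[0], 0.5*ec[-1]
    return ef

def two_grid_cycle(Af, Ac, b, x):
    x = jacobi(Af, b, x)                    # pre-smoothing
    r = b - Af @ x                          # fine-grid defect
    ec = np.linalg.solve(Ac, restrict(r))   # coarse-grid correction
    x = x + prolong(ec, len(b))
    return jacobi(Af, b, x)                 # post-smoothing

nf = 31
Af, Ac = poisson(nf), poisson((nf - 1) // 2)
b = np.sin(np.pi * np.arange(1, nf + 1) / (nf + 1))
x = np.zeros(nf)
for _ in range(10):
    x = two_grid_cycle(Af, Ac, b, x)
rel_res = np.linalg.norm(b - Af @ x) / np.linalg.norm(b)
```

The coarse grid is obtained here by rediscretization at double the mesh width, playing the role of the coarsened node sets in the unstructured setting.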
Nodal bilayer-splitting controlled by spin-orbit interactions in underdoped high-Tc cuprates
Harrison, N.; Ramshaw, B. J.; Shekhter, A.
2015-06-03
The highest superconducting transition temperatures in the cuprates are achieved in bilayer and trilayer systems, highlighting the importance of interlayer interactions for high Tc. It has been argued that interlayer hybridization vanishes along the nodal directions by way of a specific pattern of orbital overlap. Recent quantum oscillation measurements in bilayer cuprates have provided evidence for a residual bilayer-splitting at the nodes that is sufficiently small to enable magnetic breakdown tunneling at the nodes. Here we show that several key features of the experimental data can be understood in terms of weak spin-orbit interactions naturally present in bilayer systems, whose primary effect is to cause the magnetic breakdown to be accompanied by a spin flip. These features can now be understood to include the equidistant set of three quantum oscillation frequencies, the asymmetry of the quantum oscillation amplitudes in c-axis transport compared to ab-plane transport, and the anomalous magnetic field angle dependence of the amplitude of the side frequencies suggestive of small effective g-factors. We suggest that spin-orbit interactions in bilayer systems can further affect the structure of the nodal quasiparticle spectrum in the superconducting phase. PACS numbers: 71.45.Lr, 71.20.Ps, 71.18.+y
Arun, V. V.; Saharan, Neelam; Ramasubramanian, V.; Babitha Rani, A. M.; Salin, K. R.; Sontakke, Ravindra; Haridas, Harsha; Pazhayamadom, Deepak George
2017-01-01
A novel method, BBD-SSPD, is proposed by combining the Box-Behnken Design (BBD) and the Split-Split Plot Design (SSPD), which ensures a minimum number of experimental runs, leading to economical utilization in multifactorial experiments. The brine shrimp Artemia was tested to study the combined effects of photoperiod, temperature and salinity, each with three levels, on the hatching percentage and hatching time of their cysts. The BBD was employed to select 13 treatment combinations out of the 27 possible combinations, which were grouped in an SSPD arrangement. Multiple responses were optimized simultaneously using Derringer's desirability function. Photoperiod and temperature, as well as the temperature-salinity interaction, were found to significantly affect the hatching percentage of Artemia, while the hatching time was significantly influenced by photoperiod and temperature, and their interaction. The optimum conditions were a 23 h photoperiod, 29 °C temperature and 28 ppt salinity, resulting in 96.8% hatching in 18.94 h. To verify the results, the BBD-SSPD experiment was repeated with the same setup, and the results of the verification experiment were similar to those of the original experiment. It is expected that this method would be suitable for optimizing the hatching process of animal eggs. PMID:28091611
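The 13-run selection follows the standard Box-Behnken construction for three factors: every pair of factors is varied over the four (±1, ±1) corners with the remaining factor at its centre level, plus one centre run. A minimal sketch in coded units (the mapping of −1/0/+1 to actual photoperiod, temperature, and salinity levels is not reproduced here):

```python
from itertools import combinations

def box_behnken(k=3):
    # for every pair of factors, take the four (+/-1, +/-1) corners with
    # all remaining factors at their centre level (0), then add a centre run
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                r = [0] * k
                r[i], r[j] = a, b
                runs.append(tuple(r))
    runs.append((0,) * k)
    return runs

design = box_behnken(3)   # 3 pairs x 4 corners + 1 centre = 13 runs
```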
NASA Astrophysics Data System (ADS)
Saykin, D. R.; Tikhonov, K. S.; Rodionov, Ya. I.
2018-01-01
We study the Landau levels (LLs) of a Weyl semimetal with two adjacent Weyl nodes. We consider different orientations η = ∠(B, k_0) of the magnetic field B with respect to k_0, the vector of Weyl node splitting. A magnetic field facilitates the tunneling between the nodes, giving rise to a gap in the transverse energy of the zeroth LL. We show how the spectrum is rearranged at different η and how this manifests itself in the change of behavior of the differential magnetoconductance dG(B)/dB of a ballistic p-n junction. Unlike the single-cone model, where Klein tunneling reveals itself in positive dG(B)/dB, in the two-cone case G(B) is nonmonotonic with a maximum at B_c ∝ Φ_0 k_0^2 / ln(k_0 l_E) for large k_0 l_E, where l_E = √(ℏv/|e|E), with E the built-in electric field and Φ_0 the magnetic flux quantum.
Optimal allocation of resources for suppressing epidemic spreading on networks
NASA Astrophysics Data System (ADS)
Chen, Hanshuang; Li, Guofeng; Zhang, Haifeng; Hou, Zhonghuai
2017-07-01
Efficient allocation of limited medical resources is crucial for controlling epidemic spreading on networks. Based on the susceptible-infected-susceptible model, we solve the optimization problem of how best to allocate the limited resources so as to minimize prevalence, provided that the curing rate of each node is positively correlated with its medical resource. By quenched mean-field theory and heterogeneous mean-field (HMF) theory, we prove that an epidemic outbreak will be suppressed to the greatest extent if the curing rate of each node is directly proportional to its degree, under which the effective infection rate λ has a maximal threshold λ_c^opt = 1/
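The degree-proportional result can be checked numerically on a small graph. In the quenched mean-field picture the infection-free state is stable while all eigenvalues of λA − diag(μ) are negative, so the threshold can be found by bisection on λ; the star network and total resource budget below are illustrative choices, not the paper's setup.

```python
import numpy as np

def epidemic_threshold(A, mu):
    # quenched mean-field SIS: the infection-free state is stable while
    # every eigenvalue of lam*A - diag(mu) is negative; bisect on lam
    lo, hi = 0.0, 10.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if np.linalg.eigvalsh(mid * A - np.diag(mu)).max() <= 0:
            lo = mid
        else:
            hi = mid
    return lo

# star network: one hub connected to five leaves (illustrative)
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
k = A.sum(axis=1)
R = float(n)                                         # total curing budget
th_uniform = epidemic_threshold(A, np.full(n, R / n))
th_degree = epidemic_threshold(A, R * k / k.sum())   # curing rate ∝ degree
```

For this star, the uniform allocation gives a threshold of 1/√5 ≈ 0.447 while the degree-proportional allocation raises it to 0.6, consistent with the claim that degree-proportional curing suppresses the outbreak most strongly.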
Lerut, J; de Ville de Goyet, J; Donataccio, M; Reding, R; Otte, J B
1994-11-01
Split liver grafting has not gained wide acceptance, mainly because of various vascular and biliary technical problems. A new technique of right split liver transplantation is described. The piggyback implantation technique, using a wide side-to-side cavocavostomy, overcomes the problems encountered when sharing the suprahepatic vena cava cuff between two livers and obtains optimal drainage of venous allograft outflow, thus avoiding extensive bleeding at the transection margin. This technique was successfully used in two adult recipients. Piggyback transplantation using wide side-to-side cavocavostomy allows easy and safe implantation of the right split liver allograft.
Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua
2014-01-01
This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The movement of the mobile nodes in the mission space resembles the behavior of a fish swarm in water, so we treat each mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed that requires only local information at each fish node and maximizes the joint detection probability of distress signals. Formation optimization is also considered for the searching control approach and is carried out by the fish-swarm algorithm. Simulation results cover two schemes, preset route and random walks, and show that the control scheme is adaptive and effective. PMID:24741341
Na, Y; Suh, T; Xing, L
2012-06-01
Multi-objective (MO) plan optimization entails the generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome the hurdle by developing an efficient MO method using the emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node can seamlessly scale a number of working-group instances, called workers, based on user-defined settings to account for MO functions in a clinical setting. Each worker solves the objective function with an efficient sparse decomposition method, and workers are automatically terminated when their tasks finish. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of plans obtained using the cloud computing platform are identical to those obtained using a desktop PC (Intel Xeon® CPU 2.33GHz, 8GB memory). It is found that MO planning substantially speeds up the generation of the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time follows the fitted model 0.2 + 2.3/N, with r^2 > 0.99, averaged over the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
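The reported scaling can be reproduced by fitting the two-parameter model T(N) = a + b/N with ordinary least squares. The timing values below are synthesized from the paper's fitted coefficients purely for illustration, and the time units are an assumption.

```python
import numpy as np

# node counts and (hypothetical) Pareto-set generation times; the timings
# are synthesized from the paper's fitted coefficients a = 0.2, b = 2.3
N = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
T = 0.2 + 2.3 / N

# ordinary least squares for T(N) = a + b/N
X = np.column_stack([np.ones_like(N), 1.0 / N])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
a, b = coef
```

Because the constant term a dominates for large N, the model also shows why the speedup is only approximately linear in the number of nodes.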
Ali, Arif N; Switchenko, Jeffrey M; Kim, Sungjin; Kowalski, Jeanne; El-Deiry, Mark W; Beitler, Jonathan J
2014-11-15
The current study was conducted to develop a multifactorial statistical model to predict the specific head and neck (H&N) tumor site origin in cases of squamous cell carcinoma confined to the cervical lymph nodes ("unknown primaries"). The Surveillance, Epidemiology, and End Results (SEER) database was analyzed for patients with an H&N tumor site who were diagnosed between 2004 and 2011. The SEER patients were identified according to their H&N primary tumor site and clinically positive cervical lymph node levels at the time of presentation. The SEER patient data set was randomly divided into 2 data sets for the purposes of internal split-sample validation. The effects of cervical lymph node levels, age, race, and sex on H&N primary tumor site were examined using univariate and multivariate analyses. Multivariate logistic regression models and an associated set of nomograms were developed based on relevant factors to provide probabilities of tumor site origin. Analysis of the SEER database identified 20,011 patients with H&N disease with both site-level and lymph node-level data. Sex, race, age, and lymph node levels were associated with primary H&N tumor site (nasopharynx, hypopharynx, oropharynx, and larynx) in the multivariate models. Internal validation techniques affirmed the accuracy of these models on separate data. The incorporation of epidemiologic and lymph node data into a predictive model has the potential to provide valuable guidance to clinicians in the treatment of patients with squamous cell carcinoma confined to the cervical lymph nodes. © 2014 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Overlapping Community Detection based on Network Decomposition
NASA Astrophysics Data System (ADS)
Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin
2016-04-01
Community detection in complex networks has become a vital step toward understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively new link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition reduces the computation time, and noise-link elimination improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.
Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.
Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice
2015-09-04
The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer a superior precision in expressing biological knowledge.
The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei
2016-01-01
Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Most current localization algorithms in this field do not take sufficient account of node mobility. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption, and its computational complexity is somewhat high; nevertheless, because the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes remains succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field. PMID:26861348
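The range-based PSO step can be sketched as follows: each particle is a candidate position, and the fitness is the squared mismatch between candidate-to-beacon distances and the measured ranges. The swarm size, inertia, and acceleration coefficients are illustrative defaults, not the paper's parameters, and the example is 2-D rather than underwater 3-D.

```python
import numpy as np

def pso_locate(beacons, dists, n_particles=40, iters=200, seed=1):
    # fitness: squared mismatch between candidate-to-beacon distances
    # and the measured ranges
    def cost(P):
        d = np.linalg.norm(P[:, None, :] - beacons[None, :, :], axis=2)
        return ((d - dists) ** 2).sum(axis=1)

    rng = np.random.default_rng(seed)
    X = rng.uniform(-10.0, 10.0, (n_particles, 2))   # candidate positions
    V = np.zeros_like(X)
    pbest, pcost = X.copy(), cost(X)
    gbest = pbest[pcost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                        # illustrative PSO gains
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        c = cost(X)
        improved = c < pcost
        pbest[improved], pcost[improved] = X[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest

true_pos = np.array([3.0, -2.0])
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dists = np.linalg.norm(beacons - true_pos, axis=1)
est = pso_locate(beacons, dists)
```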
NASA Astrophysics Data System (ADS)
Ducoté, Julien; Dettoni, Florent; Bouyssou, Régis; Le-Gratiet, Bertrand; Carau, Damien; Dezauzier, Christophe
2015-03-01
Patterning process control for advanced nodes has required major changes over the last few years. The process control needs of critical patterning levels have been extremely aggressive since the 28nm technology node, showing that metrology accuracy/sensitivity must be finely tuned. The introduction of pitch splitting (Litho-Etch-Litho-Etch) at the 14nm FDSOI node requires the development of specific metrologies to adopt advanced process control (for CD, overlay and focus corrections). The pitch splitting process leads to final line CD uniformities that are a combination of the CD uniformities of the two exposures, while the space CD uniformities depend on both CD and OVL variability. In this paper, investigations of CD and OVL process control at 64nm minimum pitch at the Metal1 level of 14FDSOI technology, within the double patterning process flow (litho, hard mask etch, line etch), are presented. Various measurements with SEMCD tools (Hitachi) and overlay tools (KT for Image Based Overlay - IBO, and ASML for Diffraction Based Overlay - DBO) are compared. Metrology targets are embedded within a block instanced several times within the field to characterize intra-field process variations. Specific SEMCD targets were designed for independent measurement of both line CD (A and B) and space CD (A to B and B to A) for each exposure within a single measurement during the DP flow. Based on these measurements, the correlation between overlay determined with SEMCD and with standard overlay tools can be evaluated. Such correlation at different steps through the DP flow is investigated with respect to the metrology type. Process correction models are evaluated with respect to the measurement type and the intra-field sampling.
Nidanapu, Ravi Prasad; Rajan, Sundaram; Mahadevan, Subramanian; Gitanjali, Batmanabane
2016-12-01
Tablet splitting is the process of dividing a tablet into portions to obtain a prescribed dose of medication. Very few studies have investigated whether split parts of a tablet deliver the expected amount of drug to patients. Our objectives were to evaluate the split parts of adult-dose tablet formulations for percentage of weight deviation, weight uniformity, weight loss, drug content, and the content uniformity of four antiepileptic drugs (AEDs) prescribed to pediatric patients. We also measured AED plasma concentrations in the children. We chose to study first-line AEDs (phenytoin sodium [PHE], sodium valproate [SVA], carbamazepine, and phenobarbitone) as they are routinely prescribed in India. We asked caregivers to perform the same splitting process they follow in their homes on three whole tablets during their routine visit to the outpatient department. After caregivers split the tablets, we studied the weight and content of the split parts. We also used high-performance liquid chromatography to study plasma drug concentrations in children who had received split AEDs for at least 4 months. A total of 168 caregivers participated in the study, and we analyzed 1098 split tablet parts. In total, 539 (49.0 %) split parts were above the specified limit of the 2010 Indian Pharmacopeia (IP) acceptable percentage weight deviation (PHE 169 [48.8 %], SVA 187 [51.9 %], carbamazepine 56 [41.1 %], phenobarbitone 127 [49.6 %]); 456 (41.5 %) split parts were outside the proxy IP specification for drug content (PHE 135 [39.0 %], SVA 140 [38.8 %], carbamazepine 51 [37.5 %], phenobarbitone 130 [50.7 %]), and 253 split parts were outside the acceptable content uniformity range of <85 % and >115 % (PHE 85 [24.5 %], SVA 98 [27.2 %], carbamazepine 14 [10.2 %], phenobarbitone 56 [21.8 %]). In total, 130 (72.2 %) patients had plasma drug concentrations outside the therapeutic range (PHE 36 [72.0 %], SVA 39 [78.0 %], carbamazepine 34 [68.0 %], phenobarbitone 21 [70.0 %]). 
Splitting adult-dosage formulations of AEDs results in patients not receiving the optimal dose. Plasma drug concentrations are also not optimal. Pediatric dosage formulations should be preferred to splitting adult-dosage formulations in pediatric epilepsy.
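The weight-deviation screening used in the study can be sketched as below; the half-tablet weights and the ±15% acceptance limit are made-up illustrative values (the actual 2010 IP limit should be taken from the pharmacopeia itself).

```python
def percent_deviation(weights):
    # deviation of each split part from the mean part weight, in percent
    mean = sum(weights) / len(weights)
    return [100.0 * abs(w - mean) / mean for w in weights]

# hypothetical half-tablet weights (mg) from one splitting session
halves = [102.0, 120.0, 95.0, 110.0, 85.0, 108.0]
dev = percent_deviation(halves)
LIMIT = 15.0   # assumed acceptance limit; the real IP limit may differ
outside = [w for w, d in zip(halves, dev) if d > LIMIT]
```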
Shah, Peer Azmat; Hasbullah, Halabi B.; Lawal, Ibrahim A.; Aminu Mu'azu, Abubakar; Tang Jung, Low
2014-01-01
Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of the mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of a shared secret Token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintenance of the status of the correspondent node's compatibility. TOTP-RO was implemented in the network simulator NS-2, and an analytical evaluation was also performed. The analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead, with an increased level of security, compared to the standard Mobile IPv6 Return-Routability-based Route Optimization (RR-RO). PMID:24688398
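The TOTP component follows the standard RFC 4226/RFC 6238 construction: an HMAC-SHA1 over a counter (for TOTP, the index of the current 30-second window), dynamically truncated to a short decimal code. A minimal sketch of that primitive only; the Token exchange and binding-update signalling of TOTP-RO itself are not shown.

```python
import hmac, hashlib, struct, time

def hotp(secret, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over the big-endian 8-byte counter,
    # then dynamic truncation to a `digits`-digit decimal code
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def totp(secret, t=None, step=30):
    # RFC 6238: HOTP keyed on the current `step`-second time window
    t = time.time() if t is None else t
    return hotp(secret, int(t // step))
```

With the RFC test secret `b"12345678901234567890"`, `hotp(secret, 0)` yields "755224", matching the published RFC 4226 vectors.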
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.
Vimalarani, C; Subramanian, R; Sivanandam, S N
2016-01-01
A Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example, temperature, water-level, and pressure monitoring, health care, and various military applications. Sensor nodes are mostly equipped with self-contained battery power, through which they can perform adequate operations and communicate with neighboring nodes. To maximize the lifetime of wireless sensor networks, energy conservation measures are essential for improving the performance of WSNs. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks, in which clustering and cluster head selection are done by using the Particle Swarm Optimization (PSO) algorithm with respect to minimizing the power consumption in the WSN. The performance metrics are evaluated, and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
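The smoothing-plus-gradient-ascent positioning loop can be sketched in one dimension. The quadratic RSS field, noise level, and gains below are all illustrative assumptions; the real system samples RSS from multiple receivers on the robot and balances two links rather than climbing a single synthetic field.

```python
import numpy as np

def rss_sample(x, rng, noise=1.0):
    # synthetic RSS field (dB): best balance point midway between the
    # server (x = 0) and the client (x = 10), plus measurement noise
    return -40.0 - 2.0 * (x - 5.0) ** 2 + noise * rng.standard_normal()

def position_relay(steps=300, eta=0.05, delta=0.5, alpha=0.2, seed=3):
    rng = np.random.default_rng(seed)
    x, g_ema = 0.0, 0.0
    for _ in range(steps):
        # two-probe finite-difference estimate of the RSS gradient
        g = (rss_sample(x + delta, rng) - rss_sample(x - delta, rng)) / (2 * delta)
        g_ema = alpha * g + (1 - alpha) * g_ema   # exponential moving average
        x += eta * g_ema                          # stochastic gradient ascent
    return x

x_final = position_relay()
```

The exponential moving average plays the role of the paper's pre-processing filters, keeping the ascent step from chasing individual noisy samples.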
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within a mesh network. The cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SSs). Also, a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot for each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in the WMN has a transceiver that can be tuned to any available channel to eliminate secondary interference. Among the parameters considered in the performance analysis are interference from neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
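Centralized interference-aware slot assignment of the kind this abstract describes can be illustrated with a tiny greedy sketch: each node receives the smallest time slot not already used within two hops, so direct neighbors (primary interference) and neighbors-of-neighbors (secondary interference) never share a slot. The chain topology and node ordering below are illustrative assumptions, not the paper's algorithm.

```python
def greedy_slot_assignment(adj):
    """Assign each node the smallest slot not used within two hops
    (avoids primary and secondary interference on a single channel)."""
    slots = {}
    for node in sorted(adj):  # fixed ordering is a simplification
        taken = set()
        for nb in adj[node]:
            taken.add(slots.get(nb))          # 1-hop neighbors
            for nb2 in adj[nb]:
                taken.add(slots.get(nb2))     # 2-hop neighbors
        s = 0
        while s in taken:
            s += 1
        slots[node] = s
    return slots

# Chain 0-1-2-3: nodes 0 and 3 are three hops apart, so they may share a slot.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
slots = greedy_slot_assignment(adj)
```

Slot reuse across nodes more than two hops apart is exactly what shortens the schedule length and raises channel utilization.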
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.
Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-09-15
In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance of nodes between receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave with respect to user association probability and receive threshold, respectively. We then propose a sub-optimal algorithm that exploits an alternating optimization approach. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity.
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks
Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-01-01
In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance of nodes between receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave with respect to user association probability and receive threshold, respectively. We then propose a sub-optimal algorithm that exploits an alternating optimization approach. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity. PMID:28914818
A New Approach to Design Autonomous Wireless Sensor Node Based on RF Energy Harvesting System
Hakem, Nadir
2018-01-01
Energy harvesting techniques are increasingly seen as the solution for freeing wireless sensor nodes from their battery dependency. However, it remains evident that network performance features, such as network size, packet length, and duty cycle, are influenced by the amount of recovered energy. This paper proposes a new approach to defining the specifications of a stand-alone wireless node based on a Radio-Frequency Energy Harvesting System (REHS). To achieve adequate performance regarding the range of the Wireless Sensor Network (WSN), techniques for minimizing the energy consumed by the sensor node are combined with methods for optimizing the performance of the REHS. For more rigor in the design of the autonomous node, a comprehensive energy model of the node in a wireless network is established. To distribute network load equitably among the nodes, the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol is used. The model considers five energy-consumption sources, most of which are ignored in recently used models. By using the hardware parameters of commercial off-the-shelf components (Mica2 Motes and the CC2520 of Texas Instruments), the energy requirement of a sensor node is quantified. A miniature REHS based on a judicious choice of rectifying diodes is then designed and developed to achieve optimal performance in the Industrial, Scientific and Medical (ISM) band centered at 2.45 GHz. Because of the mismatch between the REHS and the antenna, a band-pass filter is designed to reduce reflection losses. A gradient search method is used to optimize the output characteristics of the adapted REHS.
At 1 mW of input RF power, the REHS provides an output DC power of 0.57 mW and a comparison with the energy requirement of the node allows the Base Station (BS) to be located at 310 m from the wireless nodes when the Wireless Sensor Network (WSN) has 100 nodes evenly spread over an area of 300 × 300 m2 and when each round lasts 10 min. The result shows that the range of the autonomous WSN increases when the controlled physical phenomenon varies very slowly. Having taken into account all the dissipation sources coexisting in a sensor node and using actual measurements of an REHS, this work provides the guidelines for the design of autonomous nodes based on REHS. PMID:29304002
NASA Astrophysics Data System (ADS)
Cowell, Martin Andrew
The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. As every aspect of global economies integrates distributed computing into industrial systems, those systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensure the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes can increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components.
In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.
Database Design for Structural Analysis and Design Optimization.
1984-10-01
[Garbled table fragment from the report's database schema; recoverable entries list per-element data arrays and their storage sizes: element number of nodes (IELT, NPAR(2)); stress printing flag (IPST, NPAR(2)); element material angle (BETA, NPAR(2)); element thickness (THICK, NPAR(2)); element D.O.F. numbers (LM, 3*NPAR(17)*NPAR(2) and 6*NPAR(7)*NPAR(2)); element nodal coordinates (XYZ, 3*NPAR(17)*NPAR(2)); element geometry number of nodes (IELTX); material property set number (MATP, NPAR(2)); material constants (PROP, NPAR(17...).]
The Staircase and Related Structures in Integer Programming.
1980-06-01
[Garbled excerpt; recoverable definitions:] ... objective value of the incumbent x; zk(i) = optimal objective value of the LP relaxation of node i (descended from subproblem k); GPk(i) = maximum Gomory penalty associated with node i of subproblem k. Fathoming occurs at the current node i when one of the following conditions holds: (a) the LP relaxation of node i is infeasible; (b) cumobj + FLOOR(zk(i) + GPk(i)) + maxc(k ... [fragment truncated]
A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.
Zhang, Yu; Zhang, Bing; Zhang, Shi
2017-06-02
Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs, through formulating and solving an optimization problem in which the relay selection of each node acts as the optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only the energy consumption rate but also the energy difference among sensor nodes into account to improve network lifetime performance. Since the problem is Non-deterministic Polynomial-hard (NP-hard) and intractable, a heuristic solution is then designed to rapidly address the optimization. The simulation results indicate that the proposed relay selection scheme has better network lifetime performance than existing algorithms and that the heuristic solution has low time complexity with only a negligible performance gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.
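A toy version of lifetime-oriented relay selection can be sketched as follows: each source node either transmits directly to the hub or through a one-hop relay, and a greedy pass picks, per node, the option that maximizes the network's minimum lifetime (time until the first node dies). The energy and cost numbers are illustrative assumptions; the paper's actual formulation and heuristic are richer than this sketch.

```python
def lifetime(energy, load):
    """Network lifetime in rounds: time until the first loaded node depletes."""
    return min(e / l for e, l in zip(energy, load) if l > 0)

def greedy_relay_selection(energy, direct, via, fwd):
    """For each source i, choose direct transmission (cost direct[i]) or a
    relay r (cost via[i][r] at i, plus fwd[r] added to r's load), greedily
    keeping whichever maximizes the min-lifetime so far."""
    n = len(energy)
    load = [0.0] * n
    choice = [None] * n
    for i in range(n):
        trial = load[:]
        trial[i] += direct[i]
        best = (lifetime(energy, trial), -1, trial)  # -1 means "direct"
        for r in range(n):
            if r == i:
                continue
            trial = load[:]
            trial[i] += via[i][r]
            trial[r] += fwd[r]
            cand = (lifetime(energy, trial), r, trial)
            if cand[0] > best[0]:
                best = cand
        choice[i], load = best[1], best[2]
    return choice, lifetime(energy, load)

energy = [100.0, 100.0, 100.0]
direct = [10.0, 2.0, 2.0]          # node 0 is far from the hub
via = [[0, 2.0, 5.0],              # via[i][r]: cost at i to reach relay r
       [9.0, 0, 9.0],
       [9.0, 9.0, 0]]
fwd = [1.0, 1.0, 1.0]              # forwarding cost added to the relay
choice, rounds = greedy_relay_selection(energy, direct, via, fwd)
# node 0 routes via relay 1; nodes 1 and 2 transmit directly
```

The min-lifetime objective captures the abstract's point that both consumption rate and energy differences among nodes matter: a relay is only attractive if it does not become the new first node to die.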
Time-Efficient High-Rate Data Flooding in One-Dimensional Acoustic Underwater Sensor Networks
Kwon, Jae Kyun; Seo, Bo-Min; Yun, Kyungsu; Cho, Ho-Shin
2015-01-01
Because underwater communication environments have poor characteristics, such as severe attenuation, large propagation delays and narrow bandwidths, data is normally transmitted at low rates through acoustic waves. On the other hand, as high traffic has recently been required in diverse areas, high-rate transmission has become necessary. In this paper, transmission/reception timing schemes that maximize time-axis use efficiency to improve the resource efficiency for high-rate transmission are proposed. The advantages of the proposed scheme are demonstrated by examining the power distributions by node, rate bounds, power levels depending on the rates and number of nodes, and network split gains, through mathematical analysis and numerical results. In addition, the simulation results show that the proposed scheme outperforms the existing packet train method. PMID:26528983
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as provide some insights into the design of robust networks.
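A minimal stand-in for this kind of robustness optimization is sketched below: degree-preserving double-edge swaps are proposed, and a swap is kept only when it does not shrink the largest component that survives a highest-degree attack. Greedy acceptance is a simplification of Tabu Search (no tabu list or aspiration criteria), and the toy graph is an assumption for illustration.

```python
import random

def giant_after_attack(nodes, edges, frac=0.2):
    """Largest connected component size after deleting the
    highest-degree fraction of nodes (targeted attack)."""
    deg = {v: 0 for v in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    removed = set(sorted(nodes, key=lambda v: -deg[v])[:int(len(nodes) * frac)])
    alive = [v for v in nodes if v not in removed]
    adj = {v: [] for v in alive}
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].append(b)
            adj[b].append(a)
    best, seen = 0, set()
    for s in alive:                      # DFS over surviving components
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best

def rewire(nodes, edges, steps=100, seed=0):
    """Degree-preserving double-edge swaps, keeping a swap only if it
    does not hurt robustness (greedy stand-in for Tabu Search)."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    score = giant_after_attack(nodes, edges)
    for _ in range(steps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue                     # swap would create a self-loop
        new = [e for e in edges if e not in ((a, b), (c, d))] + [(a, d), (c, b)]
        if len({frozenset(e) for e in new}) < len(new):
            continue                     # swap would duplicate an edge
        s = giant_after_attack(nodes, new)
        if s >= score:
            edges, score = new, s
    return edges, score

nodes = list(range(8))
edges0 = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5), (5, 6), (6, 7), (1, 2), (3, 7)]
new_edges, score = rewire(nodes, edges0)

def degseq(es):
    d = {v: 0 for v in nodes}
    for a, b in es:
        d[a] += 1
        d[b] += 1
    return d
```

The swap (a,b),(c,d) → (a,d),(c,b) is the standard way to explore the space of graphs with a fixed degree sequence, which is exactly the constraint the paper imposes.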
Preethi, V; Kanmani, S
2016-10-01
Hydrogen production by gas-phase photocatalytic splitting of hydrogen sulphide (H2S) was investigated on four semiconductor photocatalysts: CuGa1.6Fe0.4O2, ZnFe2O3, (CdS + ZnS)/Fe2O3 and Ce/TiO2. The CdS- and ZnS-coated core-shell particles (CdS + ZnS)/Fe2O3 showed the highest rate of hydrogen (H2) production under optimized conditions. A packed-bed tubular reactor was used to study the performance of the prepared photocatalysts. Selection of the best packing material is key to maximum removal efficiency. Cheap, lightweight and easily adsorbing vermiculite materials were used as a novel packing material and were found to be effective in splitting H2S. The effects of various operating parameters, such as flow rate, sulphide concentration, catalyst dosage and light irradiation, were tested and optimized for a maximum H2 conversion of 92% from industrial waste H2S. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jakovetic, Dusan; Xavier, João; Moura, José M. F.
2011-08-01
We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of private (known only by a node), convex nodes' objectives, and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel-type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations the effectiveness on two applications: l1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
Optimal Quantum Spatial Search on Random Temporal Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser
2017-12-01
To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous-time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdős-Rényi random graphs G(n, p), where p is the probability that any two given nodes are connected: after every time interval τ, a new graph G(n, p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs is sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.
Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation
NASA Astrophysics Data System (ADS)
Liu, Hua; Krishnamachari, Bhaskar
2008-08-01
Energy saving is one of the major concerns for low-rate personal area networks. This paper models energy consumption for beacon-enabled time-slotted medium access control combined with sleep scheduling in a star network formation for the IEEE 802.15.4 standard. We investigate two different upstream (data transfer from devices to a network coordinator) strategies: a) tracking strategy: the devices wake up and check status (track the beacon) in each time slot; b) non-tracking strategy: nodes only wake up upon data arrival and stay awake until the data is transmitted to the coordinator. We consider the tradeoff between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as contention access period interval, upstream speed, etc.) result in different strategies in terms of energy optimization with maximum delay constraints. Hence, according to different applications and system settings, a different strategy might be chosen by each node to achieve energy optimization from both a self-interested view and a system view. We give the relation among the tunable parameters by formulas and plots to illustrate which strategy is better under the corresponding parameters. Two main points are emphasized in our results with delay constraints: on one hand, when the system setting is fixed by the coordinator, nodes in the network can intelligently change their strategies according to the corresponding application data arrival rate; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.
NASA Astrophysics Data System (ADS)
Zheng, Yan
2015-03-01
The Internet of Things (IoT), which focuses on providing users with information exchange and intelligent control, has attracted much attention from researchers all over the world since the beginning of this century. An IoT consists of a large number of sensor nodes and data processing units, and its most important features are energy constraints, efficient communication and high redundancy. As the number of sensor nodes increases, communication efficiency and the available communication bandwidth become bottlenecks. Much existing work assumes a small number of joins, which is not suitable for the growing multi-join queries across the whole Internet of Things. To improve the communication efficiency between parallel units in a distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The algorithm considers the stored information relations and the network communication cost, and establishes an optimized information exchange rule. The experimental results show that the algorithm performs well and effectively uses the resources of each node in the distributed sensor network. Therefore, the execution efficiency of multi-join queries across different nodes can be improved.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in Side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies, using portions of the energy from both the l1 and l2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by solving the decoupled subproblems alternately. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
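The decoupling described above can be seen in a small denoising toy problem: minimize ||u||_1 + (mu/2)||u - f||^2 (the A = I case). Splitting d = u makes the l1 term a closed-form soft-thresholding step while the quadratic term also solves in closed form, with a Bregman variable b tying the two together. This is a sketch of the generic iteration under those assumptions, not the paper's sonar pipeline; the parameters are illustrative.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the closed-form l1 minimizer the split enables."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_l1(f, mu=1.0, lam=1.0, iters=200):
    """Solve min_u ||u||_1 + (mu/2)||u - f||^2 by splitting d = u
    and applying Bregman iteration (denoising case, A = I)."""
    u = np.zeros_like(f)
    d = np.zeros_like(f)
    b = np.zeros_like(f)
    for _ in range(iters):
        # u-step: quadratic subproblem, closed form since A = I
        u = (mu * f + lam * (d - b)) / (mu + lam)
        # d-step: l1 subproblem, plain shrinkage (the "decoupled" energy)
        d = shrink(u + b, 1.0 / lam)
        # Bregman update enforcing d = u
        b = b + u - d
    return u

f = np.array([3.0, 0.1, -2.0])
u = split_bregman_l1(f, mu=1.0)
# converges toward the soft-thresholding solution shrink(f, 1/mu) = [2, 0, -1]
```

For imaging problems A is a measurement or gradient operator rather than the identity, so the u-step becomes a linear solve, but the alternation between a cheap quadratic step and a cheap shrinkage step is the same.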
Towards a hybrid energy efficient multi-tree-based optimized routing protocol for wireless networks.
Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan
2012-12-13
This paper considers the problem of designing power-efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy-efficient tree-based optimized routing protocol based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to the neighbor that optimizes the ratio of power cost to distance progress in landmark coordinates, among neighbors that reduce the landmark coordinates and do not increase the distance in tree coordinates. If no such neighbor exists, then forwarding is made to the neighbor that reduces the tree-based distance only and optimizes the ratio of power cost to tree-distance progress. We theoretically prove packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, at only up to 30% additional power compared to a centralized shortest weighted path algorithm.
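The two-tier forwarding rule described above can be sketched as a single function: prefer neighbors that make landmark-coordinate progress without increasing tree distance, picking the one with the best power-cost-to-progress ratio, and fall back to tree-distance progress otherwise. The cost and distance tables below are hypothetical, introduced only to exercise the rule.

```python
def hector_next_hop(node, neighbors, power_cost, tree_dist, lm_dist):
    """Forwarding rule in the spirit of HECTOR (hypothetical cost tables):
    primary filter = landmark progress without tree-distance regression;
    fallback = tree-distance progress only."""
    primary = [n for n in neighbors
               if lm_dist[n] < lm_dist[node] and tree_dist[n] <= tree_dist[node]]
    if primary:
        return min(primary,
                   key=lambda n: power_cost[n] / (lm_dist[node] - lm_dist[n]))
    fallback = [n for n in neighbors if tree_dist[n] < tree_dist[node]]
    if fallback:
        return min(fallback,
                   key=lambda n: power_cost[n] / (tree_dist[node] - tree_dist[n]))
    return None  # no admissible neighbor

# 'c' makes cheap landmark progress but increases tree distance, so 'b' wins.
power_cost = {'b': 2.0, 'c': 1.0}
tree_dist = {'a': 2, 'b': 2, 'c': 3}
lm_dist = {'a': 4, 'b': 3, 'c': 2}
nxt = hector_next_hop('a', ['b', 'c'], power_cost, tree_dist, lm_dist)

# Fallback case: no landmark progress anywhere, so tree coordinates decide.
lm_flat = {'a': 4, 'b': 4, 'c': 4}
tree2 = {'a': 2, 'b': 1, 'c': 3}
nxt2 = hector_next_hop('a', ['b', 'c'], power_cost, tree2, lm_flat)
```

The tree-coordinate guard in the primary filter is what preserves guaranteed delivery: the packet can never wander into a region from which the rooted tree cannot bring it home.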
Towards a Hybrid Energy Efficient Multi-Tree-Based Optimized Routing Protocol for Wireless Networks
Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan
2012-01-01
This paper considers the problem of designing power-efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy-efficient tree-based optimized routing protocol based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to the neighbor that optimizes the ratio of power cost to distance progress in landmark coordinates, among neighbors that reduce the landmark coordinates and do not increase the distance in tree coordinates. If no such neighbor exists, then forwarding is made to the neighbor that reduces the tree-based distance only and optimizes the ratio of power cost to tree-distance progress. We theoretically prove packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, at only up to 30% additional power compared to a centralized shortest weighted path algorithm. PMID:23443398
Three faces of node importance in network epidemiology: Exact results for small graphs
NASA Astrophysics Data System (ADS)
Holme, Petter
2017-12-01
We investigate three aspects of the importance of nodes with respect to susceptible-infectious-removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size), and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets are not overlapping. We find that (i) node separation is more important than centrality for more than one active node, (ii) vaccination and influence maximization are the most different aspects of importance, and (iii) the three aspects are more similar when the infection rate is low.
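A Monte Carlo counterpart to the influence-maximization quantity above (expected outbreak size from a seed node) can be sketched for a discrete-time SIR variant in which each infectious node transmits along each edge with probability beta and recovers after one step. The star topology and parameters are illustrative assumptions; the paper computes these quantities exactly rather than by simulation.

```python
import random

def sir_outbreak(adj, seed_node, beta, rng):
    """One SIR run: per-edge transmission probability beta, recovery after
    one time step; returns the final outbreak size."""
    infected = {seed_node}
    recovered = set()
    while infected:
        new = set()
        for v in infected:
            for w in adj[v]:
                if w not in infected and w not in recovered and rng.random() < beta:
                    new.add(w)
        recovered |= infected
        infected = new - recovered
    return len(recovered)

def influence(adj, seed_node, beta=0.5, runs=2000, seed=0):
    """Monte Carlo estimate of the expected outbreak size from one seed."""
    rng = random.Random(seed)
    return sum(sir_outbreak(adj, seed_node, beta, rng) for _ in range(runs)) / runs

# A 4-node star: the hub reaches everyone in one step, while a leaf must
# first infect the hub, so the hub should be the more influential seed.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

Even on this tiny graph the three notions of importance can be probed the same way: vaccination corresponds to deleting a node before seeding, and sentinel surveillance to recording when a given node first becomes infected.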
Analysis of complex network performance and heuristic node removal strategies
NASA Astrophysics Data System (ADS)
Jahanpour, Ehsan; Chen, Xin
2013-12-01
Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics, including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
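The simplest of the structured strategies above, degree-based removal, can be contrasted with random removal on a toy graph, using the size of the largest surviving component as a stand-in performance metric. The graph and the choice of metric are illustrative assumptions, not the paper's networks or its six metrics.

```python
def lcc_size(adj, removed):
    """Largest connected component size after removing a set of nodes."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w in alive and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def degree_attack(adj, k):
    """Remove the k highest-degree nodes (the simplest heuristic strategy)."""
    return set(sorted(adj, key=lambda v: -len(adj[v]))[:k])

# A hub with three leaves, plus a short chain hanging off node 4.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4, 6], 6: [5]}
hub_removed = lcc_size(adj, degree_attack(adj, 1))
# Baseline: average damage of removing one node chosen uniformly at random.
random_removed = sum(lcc_size(adj, {v}) for v in adj) / len(adj)
```

Removing the hub fragments the graph far more than an average random removal, which is the intuition behind why degree and betweenness strategies outperform the random baseline in the paper's experiments.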
Merkel cell carcinoma: An algorithm for multidisciplinary management and decision-making.
Prieto, Isabel; Pérez de la Fuente, Teresa; Medina, Susana; Castelo, Beatriz; Sobrino, Beatriz; Fortes, Jose R; Esteban, David; Cassinello, Fernando; Jover, Raquel; Rodríguez, Nuria
2016-02-01
Merkel cell carcinoma (MCC) is a rare and aggressive neuroendocrine tumor of the skin. Therapeutic approach is often unclear, and considerable controversy exists regarding MCC pathogenesis and optimal management. Due to its rising incidence and poor prognosis, it is imperative to establish the optimal therapy for both the tumor and the lymph node basin, and for treatment to include sentinel node biopsy. Sentinel node biopsy is currently the most consistent predictor of survival for MCC patients, although there are conflicting views and a lack of awareness regarding node management. Tumor and node management involve different specialists, and their respective decisions and interventions are interrelated. No effective systemic treatment has been made available to date, and therefore patients continue to experience distant failure, often without local failure. This review aims to improve multidisciplinary decision-making by presenting scientific evidence of the contributions of each team member implicated in MCC management. Following this review of previously published research, the authors conclude that multidisciplinary team management is beneficial for care, and propose a multidisciplinary decision algorithm for managing this tumor. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Near infrared imaging to identify sentinel lymph nodes in invasive urinary bladder cancer
NASA Astrophysics Data System (ADS)
Knapp, Deborah W.; Adams, Larry G.; Niles, Jacqueline D.; Lucroy, Michael D.; Ramos-Vara, Jose; Bonney, Patty L.; deGortari, Amalia E.; Frangioni, John V.
2006-02-01
Approximately 12,000 people are diagnosed with invasive transitional cell carcinoma of the urinary bladder (InvTCC) each year in the United States. Surgical removal of the bladder (cystectomy) and regional lymph node dissection are considered frontline therapy. Cystectomy causes extensive acute morbidity, and 50% of patients with InvTCC have occult metastases at the time of diagnosis. Better staging procedures for InvTCC are greatly needed. This study was performed to evaluate an intra-operative near infrared fluorescence imaging (NIRF) system (Frangioni laboratory) for identifying sentinel lymph nodes draining InvTCC. NIRF imaging was used to map lymph node drainage from specific quadrants of the urinary bladder in normal dogs and pigs, and to map lymph node drainage from naturally occurring InvTCC in pet dogs, where the disease closely mimics the human condition. Briefly, during surgery NIR fluorophores (human serum albumin-fluorophore complex, or quantum dots) were injected directly into the bladder wall, and fluorescence was observed in lymphatics and regional nodes. Conditions studied to optimize the procedure included: type of fluorophore, depth of injection, volume of fluorophore injected, and degree of bladder distention at the time of injection. Optimal imaging occurred with very superficial injection of the fluorophore in the serosal surface of the moderately distended bladder. Considerable variability was noted from dog to dog in the pattern of lymph node drainage. NIR fluorescence was noted in lymph nodes with metastases in dogs with InvTCC. In conclusion, intra-operative NIRF imaging is a promising approach to improve sentinel lymph node mapping in invasive urinary bladder cancer.
Levinson, Kimberly L; Mahdi, Haider; Escobar, Pedro F
2013-01-01
The present study was performed to determine the optimal dosage of indocyanine green (ICG) to accurately differentiate the sentinel node from surrounding tissue and then to test this dosage using novel single-port robotic instrumentation. The study was performed in healthy female pigs. After induction of anesthesia, all pigs underwent exploratory laparotomy, dissection of the bladder, and colpotomy to reveal the cervical os. With use of a 21-gauge needle, 0.5 mL normal saline solution was injected at the 3- and 9-o'clock positions as control. Four concentrations of ICG were constituted for doses of 1000, 500, 250, and 175 μg per 0.5 mL. ICG was then injected at the 3- and 9-o'clock positions on the cervix. The SPY camera was used to track ICG into the sentinel nodes and to quantify the intensity of light emitted. SPY technology uses an intensity scale of 1 to 256; this scale was used to determine the difference in intensity between the sentinel node and surrounding tissues. The optimal dosage was tested using single-port robotic instrumentation with the same injection techniques. A sentinel node was identified at all doses except 175 μg, at which ICG stayed in the cervix and vasculature only. For both the 500- and 250-μg doses, the sentinel node was identified before reaching maximum intensity. At maximum intensity, the difference between the surrounding tissue and the node was 207 (251 vs 44) for the 500-μg dose and 159 (251 vs 92) for the 250-μg dose. Sentinel lymph node (SLN) biopsy was successfully performed using single-port robotic technology with both the 250- and 500-μg doses. For SLN detection, the dose of ICG is related to the ability to differentiate the sentinel node from the surrounding tissue. An ICG dose of 250 to 500 μg enables identification of a SLN with more distinction from the surrounding tissues, and this procedure is feasible using single-port robotics instrumentation. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.
Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks
NASA Astrophysics Data System (ADS)
Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla
2016-12-01
Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel to a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system, together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end, are used as the main design objectives. While the optimal DMT for the new channel model remains fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.
Kümmerli, Rolf; Keller, Laurent
2009-01-01
Split sex ratio—a pattern where colonies within a population specialize in either male or queen production—is a widespread phenomenon in ants and other social Hymenoptera. It has often been attributed to variation in colony kin structure, which affects the degree of queen–worker conflict over optimal sex allocation. However, recent findings suggest that split sex ratio is a more diverse phenomenon, which can evolve for multiple reasons. Here, we provide an overview of the main conditions favouring split sex ratio. We show that each split sex-ratio type arises due to a different combination of factors determining colony kin structure, queen or worker control over sex ratio and the type of conflict between colony members. PMID:19457886
A predictive index of axillary nodal involvement in operable breast cancer.
De Laurentiis, M.; Gallo, C.; De Placido, S.; Perrone, F.; Pettinato, G.; Petrella, G.; Carlomagno, C.; Panico, L.; Delrio, P.; Bianco, A. R.
1996-01-01
We investigated the association between pathological characteristics of primary breast cancer and degree of axillary nodal involvement and obtained a predictive index of the latter from the former. In 2076 cases, 17 histological features, including primary tumour and local invasion variables, were recorded. The whole sample was randomly split into a training sample (75% of cases) and a test sample. Simple and multiple correspondence analysis were used to select the variables to enter into a multinomial logit model to build an index predictive of the degree of nodal involvement. The response variable was axillary nodal status coded in four classes (N0, N1-3, N4-9, N > or = 10). The predictive index was then evaluated by testing goodness-of-fit and classification accuracy. Covariates significantly associated with nodal status were tumour size (P < 0.0001), tumour type (P < 0.0001), type of border (P = 0.048), multicentricity (P = 0.003), invasion of lymphatic and blood vessels (P < 0.0001) and nipple invasion (P = 0.006). Goodness-of-fit was validated by high concordance between observed and expected numbers of cases in each decile of predicted probability in both training and test samples. Classification accuracy analysis showed that true node-positive cases were well recognised (84.5%), but there was no clear distinction among the classes of node-positive cases. However, 10 year survival analysis showed a superimposable prognostic behaviour between predicted and observed nodal classes. Moreover, misclassified node-negative patients (i.e. those who are predicted positive) showed an outcome closer to that of patients with 1-3 metastatic nodes than to node-negative ones. In conclusion, the index cannot completely substitute for axillary node information, but it is a predictor of prognosis as accurate as nodal involvement and identifies a subgroup of node-negative patients with unfavourable prognosis. PMID:8630286
NASA Astrophysics Data System (ADS)
Zhu, Jun; Zhang, David Wei; Kuo, Chinte; Wang, Qing; Wei, Fang; Zhang, Chenming; Chen, Han; He, Daquan; Hsu, Stephen D.
2017-07-01
As the technology node shrinks, aggressive design rules for contact and other back end of line (BEOL) layers continue to drive the need for more effective full-chip patterning optimization. Resist top loss is one of the major challenges for 28 nm and below technology nodes, as it can lead to post-etch hotspots that are difficult to predict and eventually degrade the process window significantly. To tackle this problem, we used an advanced programmable illuminator (FlexRay) and the Tachyon SMO (Source Mask Optimization) platform to make resist-aware source optimization possible, which is shown to greatly improve the imaging contrast, enhance focus and exposure latitude, and minimize resist top loss, thus improving the yield.
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
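The recursive bisection step described above can be illustrated in one dimension: cut a weighted element list where the running load reaches half the total, then recurse on each half. This is a minimal sketch assuming the tissue/non-tissue weighting (b) from the abstract, not the actual ORB implementation:

```python
def orb_partition(weights, depth):
    """Recursively bisect a 1-D sequence of element weights into 2**depth
    contiguous segments of near-equal total weight (a 1-D sketch of ORB)."""
    if depth == 0:
        return [list(weights)]
    total = sum(weights)
    running, cut = 0.0, len(weights)
    for i, w in enumerate(weights):
        running += w
        if running >= total / 2:      # first index where half the load is reached
            cut = i + 1
            break
    return (orb_partition(weights[:cut], depth - 1)
            + orb_partition(weights[cut:], depth - 1))

# Weighting (b): tissue elements weigh 10, non-tissue elements weigh 1.
weights = [10, 10, 1, 1, 10, 10, 1, 1]
parts = orb_partition(weights, 2)     # 2**2 = 4 partitions for 4 compute nodes
```

As the abstract notes, balancing computational load this way ignores communication cost, so the resulting speedup is near-optimal for computation but not overall.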
Optimizing hidden layer node number of BP network to estimate fetal weight
NASA Astrophysics Data System (ADS)
Su, Juan; Zou, Yuanwen; Lin, Jiangli; Wang, Tianfu; Li, Deyu; Xie, Tao
2007-12-01
The ultrasonic estimation of fetal weight before delivery is of great significance for the obstetrical clinic. Estimating fetal weight more accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth and reducing the risk of newborn complications. In this paper, we introduce a method which combines golden section search and an artificial neural network (ANN) to estimate fetal weight. The golden section search is employed to optimize the hidden layer node number of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation, and simultaneously avoids choosing the hidden layer node number by subjective experience. The estimation coincidence rate achieves 74.19%, and the mean absolute error is 185.83 g.
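The golden section optimization of the hidden layer node number can be sketched as a standard golden-section search over a unimodal error curve. The error function below is a hypothetical stand-in for the BP network's validation error, not the paper's actual model:

```python
def golden_section_search(f, lo, hi, tol=1.0):
    """Golden-section search for the minimum of a unimodal function f on
    [lo, hi]; here f plays the role of the BP network's validation error
    as a function of the hidden-layer node number."""
    phi = (5 ** 0.5 - 1) / 2                      # ~0.618, the golden ratio conjugate
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                           # minimum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                                     # minimum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return round((a + b) / 2)                     # nearest integer node count

# Hypothetical validation-error curve with its minimum at 12 hidden nodes.
error = lambda n: (n - 12) ** 2 + 5
best = golden_section_search(error, 2, 30)
```

Each iteration shrinks the search interval by a constant factor, so only a handful of network trainings are needed instead of trying every candidate node count.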
OPEX: Optimized Eccentricity Computation in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, Keith
2011-11-14
Real-world graphs have many properties of interest, but often these properties are expensive to compute. We focus on eccentricity, radius and diameter in this work. These properties are useful measures of the global connectivity patterns in a graph. Unfortunately, computing eccentricity for all nodes is O(n^2) for a graph with n nodes. We present OPEX, a novel combination of optimizations which improves computation time of these properties by orders of magnitude in real-world experiments on graphs of many different sizes. We run OPEX on graphs with up to millions of links. OPEX gives either exact results or bounded approximations, unlike its competitors which give probabilistic approximations or sacrifice node-level information (eccentricity) to compute graph-level information (diameter).
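The naive baseline that such optimizations improve on, one breadth-first search per node, can be sketched as follows (the path graph is illustrative):

```python
from collections import deque

def eccentricities(adj):
    """Exact eccentricity of every node via one BFS per node: the
    quadratic baseline that OPEX-style optimizations accelerate."""
    ecc = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ecc[src] = max(dist.values())   # farthest reachable node from src
    return ecc

# Path graph 0-1-2-3: centre nodes have eccentricity 2, endpoints 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = eccentricities(adj)
radius, diameter = min(ecc.values()), max(ecc.values())
```

Radius and diameter fall out of the per-node eccentricities as their minimum and maximum, which is why avoiding the full per-node computation (as competitors do) sacrifices node-level information.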
Matching pursuit parallel decomposition of seismic data
NASA Astrophysics Data System (ADS)
Li, Chuanhui; Zhang, Fanchang
2017-07-01
In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. We pick a fixed number of envelope peaks from the current signal in every iteration according to the number of compute nodes and assign them to the compute nodes on average to search the optimal Morlet wavelets in parallel. With the help of parallel computer systems and Message Passing Interface, the parallel algorithm gives full play to the advantages of parallel computing to significantly improve the computation speed of the matching pursuit decomposition and also has good expandability. Besides, searching only one optimal Morlet wavelet by every compute node in every iteration is the most efficient implementation.
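The work-sharing step the abstract describes, where a fixed number of envelope peaks is handed out evenly to compute nodes in each iteration, can be sketched as a round-robin assignment (the peak positions below are illustrative):

```python
def assign_peaks(peak_indices, n_nodes):
    """Distribute envelope-peak positions evenly across compute nodes
    (round-robin), so each node searches its own optimal Morlet wavelets
    in parallel; a sketch of the work-sharing step, not the MPI code."""
    buckets = [[] for _ in range(n_nodes)]
    for i, p in enumerate(peak_indices):
        buckets[i % n_nodes].append(p)
    return buckets

# Six envelope peaks picked in one iteration, shared among 3 compute nodes.
peaks = [14, 52, 87, 130, 166, 201]
buckets = assign_peaks(peaks, 3)
```

In the actual algorithm each bucket would be sent to its compute node via Message Passing Interface; the point here is only that the per-iteration work divides evenly, which is why searching one optimal wavelet per node per iteration is the most efficient arrangement.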
Livermore Big Artificial Neural Network Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Essen, Brian Van; Jacobs, Sam; Kim, Hyojin
2016-07-01
LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.
A new hierarchical method to find community structure in networks
NASA Astrophysics Data System (ADS)
Saoud, Bilal; Moussaoui, Abdelouahab
2018-04-01
Community structure is very important for understanding a network which represents a context. Many community detection methods have been proposed, such as hierarchical methods. In our study, we propose a new hierarchical method for community detection in networks based on a genetic algorithm. In this method we use a genetic algorithm to split a network into two networks which maximize the modularity. Each new network represents a cluster (community). Then we repeat the splitting process until we get one node in each cluster. We use the modularity function to measure the strength of the community structure found by our method, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our method is highly effective at discovering community structure in both computer-generated and real-world network data.
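The modularity objective that each bisection maximizes can be computed directly from its standard (Newman) definition. A minimal sketch on an illustrative graph of two triangles joined by one edge:

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition: fraction of edges inside
    communities minus the fraction expected under random rewiring."""
    m2 = sum(len(nbrs) for nbrs in adj.values())          # 2m = sum of degrees
    comm = {n: c for c, nodes in enumerate(communities) for n in nodes}
    q = 0.0
    for i in adj:                                          # over ordered node pairs
        for j in adj:
            if comm[i] == comm[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - len(adj[i]) * len(adj[j]) / m2
    return q / m2

# Two triangles joined by the single edge (2, 3): the natural split scores well,
# while lumping everything into one community scores zero.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
q = modularity(adj, [{0, 1, 2}, {3, 4, 5}])
```

A genetic algorithm in the method above would evolve candidate two-way splits and keep the one with the largest Q, recursing on each side; stopping when further splits no longer raise Q gives the number of communities.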
Assessment of Cognitive Communications Interest Areas for NASA Needs and Benefits
NASA Technical Reports Server (NTRS)
Knoblock, Eric J.; Madanayake, Arjuna
2017-01-01
This effort provides a survey and assessment of various cognitive communications interest areas, including node-to-node link optimization, intelligent routing/networking, and learning algorithms, and is conducted primarily from the perspective of NASA space communications needs and benefits. Areas of consideration include optimization methods, learning algorithms, and candidate implementations/technologies. Assessments of current research efforts are provided with mention of areas for further investment. Other considerations, such as antenna technologies and cognitive radio platforms, are briefly provided as well.
Distributed Optimization of Multi Beam Directional Communication Networks
2017-06-30
kT is the noise figure of the receiver. The path loss from node i to the central station is denoted as fi,C and is similarly defined. We seek to...optimally allocate power among several transmit beams per node in order to maximize the total signal-to-interference noise ratio at the central station...Computing, vol. 15, no. 9, September 2016. [6] X. Quan, Y. Liu, S. Shao, C. Huang, and Y. Tang, “Impacts of Phase Noise on Digital Self-Interference
HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler
NASA Technical Reports Server (NTRS)
Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.
2012-01-01
HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state-space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes of each new state to explore. The optimization uses a set of pre-conditions and post-conditions to create the children states. The Python language was adopted to not only enable more agile development, but to also allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of Operations Research and Management Science where this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple agent task optimization would all benefit by using HURON.
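Forward state-space search with pre-conditions and post-conditions, as described above, can be sketched as uniform-cost search. The two-task action model below is a hypothetical illustration, not HURON's actual model:

```python
import heapq
import itertools

def plan(initial, goal_test, actions):
    """Uniform-cost forward search in state-space: each frontier entry
    holds a state; actions whose pre-condition holds generate child
    states via their post-condition, and the cheapest plan is returned."""
    tie = itertools.count()                 # break cost ties without comparing states
    frontier = [(0, next(tie), initial, [])]
    seen = set()
    while frontier:
        cost, _, state, steps = heapq.heappop(frontier)
        if goal_test(state):
            return cost, steps
        if state in seen:
            continue
        seen.add(state)
        for name, pre, post, c in actions:
            if pre(state):
                heapq.heappush(frontier,
                               (cost + c, next(tie), post(state), steps + [name]))
    return None

# Two science tasks: 'sample' has the pre-condition that 'image' was done.
actions = [
    ("image",  lambda s: "imaged" not in s,
               lambda s: s | {"imaged"}, 2),
    ("sample", lambda s: "imaged" in s and "sampled" not in s,
               lambda s: s | {"sampled"}, 3),
]
result = plan(frozenset(), lambda s: "sampled" in s, actions)
```

Swapping the cost ordering for a productivity score turns the same skeleton from a minimum-time search into a maximum-productivity one, matching the goal choices the abstract mentions.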
Optimization of the extent of surgical treatment in patients with stage I in cervical cancer
NASA Astrophysics Data System (ADS)
Chernyshova, A. L.; Kolomiets, L. A.; Sinilkin, I. G.; Chernov, V. I.; Lyapunov, A. Yu.
2016-08-01
The study included 26 patients with FIGO stage Ia1-Ib1 cervical cancer who underwent fertility-sparing surgery (transabdominal trachelectomy). To visualize sentinel lymph nodes, lymphoscintigraphy with injection of 99mTc-labelled nanocolloid was performed the day before surgery. Intraoperative identification of sentinel lymph nodes using a hand-held gamma probe was carried out to determine the radioactive counts over the draining lymph node basin. The sentinel lymph node detection in cervical cancer patients contributes to the accurate clinical assessment of the pelvic lymph node status, precise staging of the disease and tailoring of surgical treatment to individual patients.
Solvable model for chimera states of coupled oscillators.
Abrams, Daniel M; Mirollo, Rennie; Strogatz, Steven H; Wiley, Daniel A
2008-08-22
Networks of identical, symmetrically coupled oscillators can spontaneously split into synchronized and desynchronized subpopulations. Such chimera states were discovered in 2002, but are not well understood theoretically. Here we obtain the first exact results about the stability, dynamics, and bifurcations of chimera states by analyzing a minimal model consisting of two interacting populations of oscillators. Along with a completely synchronous state, the system displays stable chimeras, breathing chimeras, and saddle-node, Hopf, and homoclinic bifurcations of chimeras.
Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.
2006-01-01
In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (40.0 to 63.5% and 46.0 to 56.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467
Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zhu, G.; Chen, X.
2011-12-01
We implement a non-staggered finite difference method with split nodes to solve the dynamic rupture problem for non-planar faults. The split-node method has been widely used for dynamic simulation because it represents the fault plane more precisely than other methods, such as the thick fault and stress glut approaches. The finite difference method is also a popular numerical method for solving kinematic and dynamic problems in seismology. However, previous works have focused mostly on the staggered-grid method, because of its simplicity and computational efficiency. That method has disadvantages compared with the non-staggered finite difference method in some respects, for example in describing boundary conditions, especially irregular boundaries or non-planar faults. Zhang and Chen (2006) proposed a MacCormack high-order non-staggered finite difference method based on curved grids to precisely solve irregular boundary problems. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. The fault plane is a kind of boundary condition, which can of course be irregular, so we are confident that the rupture process can be simulated for any kind of bending fault plane. We first validate this method in the case of Cartesian coordinates. In the case of a bending fault, curvilinear grids are used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KNUPP,PATRICK; MITCHELL,SCOTT A.
1999-11-01
In an attempt to automatically produce high-quality all-hex meshes, we investigated a mesh improvement strategy: given an initial poor-quality all-hex mesh, we iteratively changed the element connectivity, adding and deleting elements and nodes, and optimized the node positions. We found a set of hex reconnection primitives. We improved the optimization algorithms so they can untangle a negative-Jacobian mesh, even considering Jacobians on the boundary, and subsequently optimize the condition number of elements in an untangled mesh. However, even after applying both the primitives and optimization we were unable to produce high-quality meshes in certain regions. Our experiences suggest that many boundary configurations of quadrilaterals admit no hexahedral mesh with positive Jacobians, although we have no proof of this.
NASA Astrophysics Data System (ADS)
Günay, Mehmet; Hakioğlu, Tuğrul; Hüseyin Sömek, Hasan
2017-03-01
In noncentrosymmetric superconductors (NCSs), the inversion symmetry (IS) is most commonly broken by an antisymmetric spin-orbit coupling (SOC), removing the spin degeneracy and splitting the Fermi surface (FS) into two branches. A two-component condensate is then produced, mixing an even singlet and an odd triplet. When the triplet and singlet strengths are comparable, the pair potential can have rich nodes. The angular line nodes (ALNs) are associated with the point group symmetries of the anisotropic lattice structure and are widely studied in the literature. When the anisotropy is weak, other types of nodes can be present, which affect the low temperature properties differently. Here, we focus on weakly anisotropic NCSs and the line nodes which survive in the limit of full isotropy. We study the topology of these radial line nodes (RLNs) and show that it is characterized by a Z2 index similar to that of quantum-spin-Hall insulators. From the thermodynamic perspective, the RLNs cause, even in the topological phases, an exponentially suppressed low temperature behaviour which can be mistaken for nodeless s-wave pairing, thus providing an explanation for a number of recent experiments with controversial pairing symmetries. In the rare case when the RLN is on the Fermi surface, the exponential suppression is replaced by a linear temperature dependence. The RLNs are difficult to detect, and for this reason, they may have escaped experimental attention. We demonstrate that Andreev conductance measurements with clean interfaces can efficiently identify the weakly anisotropic (WA) conditions where the RLNs are expected to be found.
An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network
Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian
2015-01-01
Considering wireless sensor network characteristics, this paper combines anomaly and mis-use detection and proposes an integrated detection model of cluster-based wireless sensor network, aiming at enhancing the detection rate and reducing the false rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection of sensor nodes, cluster-head nodes and Sink nodes. Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation is applied to mis-use detection of the Sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance. PMID:26447696
Exploiting node mobility for energy optimization in wireless sensor networks
NASA Astrophysics Data System (ADS)
El-Moukaddem, Fatme Mohammad
Wireless Sensor Networks (WSNs) have become increasingly available for data-intensive applications such as micro-climate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit the sheer amount of data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies such as batteries or small solar panels. The availability of numerous low-cost robotic units (e.g. Robomote and Khepera) has made it possible to construct sensor networks consisting of mobile sensor nodes. It has been shown that the controlled mobility offered by mobile sensors can be exploited to improve the energy efficiency of a network. In this thesis, we propose schemes that use mobile sensor nodes to reduce the energy consumption of data-intensive WSNs. Our approaches differ from previous work in two main aspects. First, our approaches do not require complex motion planning of mobile nodes, and hence can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless communications into a holistic optimization framework. We consider three problems arising from the limited energy in the sensor nodes. In the first problem, the network consists of mostly static nodes and contains only a few mobile nodes. In the second and third problems, we assume essentially that all nodes in the WSN are mobile. We first study a new problem called max-data mobile relay configuration (MMRC) that finds the positions of a set of mobile sensors, referred to as relays, that maximize the total amount of data gathered by the network during its lifetime. We show that the MMRC problem is surprisingly complex even for a trivial network topology due to the joint consideration of the energy consumption of both wireless communication and mechanical locomotion.
We present optimal MMRC algorithms and practical distributed implementations for several important network topologies and applications. Second, we consider the problem of minimizing the total energy consumption of a network. We design an iterative algorithm that improves a given configuration by relocating nodes to new positions. We show that this algorithm converges to the optimal configuration for the given transmission routes. Moreover, we propose an efficient distributed implementation that does not require explicit synchronization. Finally, we consider the problem of maximizing the lifetime of the network. We propose an approach that exploits the mobility of the nodes to balance the energy consumption throughout the network. We develop efficient algorithms for single and multiple round approaches. For all three problems, we evaluate the efficiency of our algorithms through simulations. Our simulation results based on realistic energy models obtained from existing mobile and static sensor platforms show that our approaches significantly improve the network's performance and outperform existing approaches.
Architectures for Cognitive Systems
2010-02-01
A highly modular many-node chip was designed which addressed power efficiency to the maximum extent possible. Each node contains an Asynchronous Field...optimization to perform complex cognitive computing operations. This project focused on the design of the core and integration across a four-node chip. A follow-on project will focus on creating a 3-dimensional stack of chips that is enabled by the low power usage. The chip incorporates structures to
[Research on improving spectrum resolution of optimized Wollaston prism array].
Zhang, Peng; Wang, Jian-Rong; Zhang, Guo-Chen; Hou, Wen
2011-11-01
In order to improve the spectrum resolution by increasing the structure angle of the Wollaston prism without affecting the image quality of the interference fringes, the authors optimized the structure of the Wollaston prism. By calculating the splitting angle as a function of the structure angle, the analysis indicated that adding, after the Wollaston prism, an isosceles triangle prism of the same nature as the second wedge-shaped prism, which makes the o and e light parallel to the optical axis with alpha = 0 degrees, leaves the imaging interference fringes unaffected by changes in the splitting angle. Several optimized Wollaston prisms were made into an array to improve the spectral resolution. Experiments used both the traditional and the optimized Wollaston prism arrays to detect the spectrum of a 980 nm laser. The experimental data showed that the optimized Wollaston prism array yields clearer contrast of the interference fringes, and the spectral data obtained by Fourier transform with DSP are more accurate.
NASA Astrophysics Data System (ADS)
Albaaj, Azhar; Makki, S. Vahab A.; Alabkhat, Qassem; Zahedi, Abdulhamid
2017-07-01
Wireless networks suffer from battery discharge, especially in cooperative communications where multiple relays play an important role but are energy-constrained. To overcome this problem, energy harvesting from radio frequency signals is applied to charge the node battery. These intermediate nodes have the ability to harvest energy from the source signal and use the harvested energy to transmit information to the destination. In fact, the node first tries to harvest energy and then transmits the data to the destination. The division between energy harvesting and data transmission can be done with two algorithms: the time-switching-based relaying protocol and the power-splitting-based relaying protocol. These two algorithms can also be applied in delay-limited and delay-tolerant transmission systems. Previous works have assumed a single relay for energy harvesting, but in this article, the proposed method concentrates on improving the outage probability and throughput by using multiple antennas in each relay node instead of a single antenna. According to our simulation results, when using multi-antenna relays, the ability to harvest energy is increased and thus system performance is improved to a great extent. The maximum ratio combining scheme is used when the destination chooses the best signal among relays and antennas satisfying the required signal-to-noise ratio.
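The power-splitting trade-off described above can be sketched numerically: a fraction rho of the received power is harvested and reused for relaying while 1 - rho feeds the information branch. All constants, the simplified AF SNR approximation, and the channel gains below are illustrative assumptions, not values from the article:

```python
def psr_relay_snr(P, h1, h2, rho, eta=0.7, noise=1e-3):
    """End-to-end SNR sketch for power-splitting relaying (PSR): a fraction
    rho of the received power is harvested (with conversion efficiency eta)
    and retransmitted; the remaining 1 - rho drives information processing.
    Uses the standard two-hop AF SNR approximation."""
    p_relay = eta * rho * P * abs(h1) ** 2        # harvested power reused for relaying
    snr1 = (1 - rho) * P * abs(h1) ** 2 / noise   # source -> relay (information branch)
    snr2 = p_relay * abs(h2) ** 2 / noise         # relay -> destination
    return snr1 * snr2 / (snr1 + snr2 + 1)

# Sweep the splitting ratio: too little harvesting starves the relay's
# transmission, too much starves the decoder, so the SNR peaks in between.
best_rho = max((r / 100 for r in range(1, 100)),
               key=lambda r: psr_relay_snr(P=1.0, h1=0.8, h2=0.6, rho=r))
```

The interior optimum is the reason the splitting ratio must be tuned rather than fixed; with multiple antennas per relay, each antenna's harvested contribution shifts this optimum further.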
A Distributed Data Acquisition System for the Sensor Network of the TAWARA_RTM Project
NASA Astrophysics Data System (ADS)
Fontana, Cristiano Lino; Donati, Massimiliano; Cester, Davide; Fanucci, Luca; Iovene, Alessandro; Swiderski, Lukasz; Moretto, Sandra; Moszynski, Marek; Olejnik, Anna; Ruiu, Alessio; Stevanato, Luca; Batsch, Tadeusz; Tintori, Carlo; Lunardon, Marcello
This paper describes a distributed Data Acquisition System (DAQ) developed for the TAWARA_RTM project (TAp WAter RAdioactivity Real Time Monitor). The aim is detecting the presence of radioactive contaminants in drinking water, in order to prevent deliberate or accidental threats. Employing a set of detectors, it is possible to detect alpha, beta and gamma radiation from emitters dissolved in water. The Sensor Network (SN) consists of several heterogeneous nodes controlled by a centralized server. The SN cyber-security is guaranteed in order to protect it from external intrusions and malicious acts. The nodes were installed in different locations along the water treatment processes in the waterworks plant supplying the aqueduct of Warsaw, Poland. Embedded computers control the simpler nodes and are directly connected to the SN. Local PCs (LPCs) control the more complex nodes, which consist of signal digitizers acquiring data from several detectors. The DAQ on each LPC is split into several processes communicating via sockets on a local sub-network. Each process is dedicated to a very simple task (e.g. data acquisition, data analysis, hydraulics management) in order to have a flexible and fault-tolerant system. The main SN and the local DAQ networks are separated by data routers to ensure cyber-security.
Node Redeployment Algorithm Based on Stratified Connected Tree for Underwater Sensor Networks
Liu, Jun; Jiang, Peng; Wu, Feng; Yu, Shanen; Song, Chunyue
2016-01-01
During underwater sensor network (UWSN) operation, node drift with the water environment causes network topology changes. Periodic examination and adjustment of node locations are needed to maintain good network monitoring quality for as long as possible. In this paper, a node redeployment algorithm based on a stratified connected tree for UWSNs is proposed. At every network adjustment moment, self-examination and adjustment of node locations are performed first. If a node is outside the monitored space, it returns along a straight line to the last location recorded in its memory. Then, the network topology is stratified into a connected tree rooted at the sink node by broadcasting ready information level by level, which improves the network connectivity rate. Finally, jointly considering the network coverage and connectivity rates and the node movement distance, the sink node performs centralized optimization on the locations of the leaf nodes in the stratified connected tree. Simulation results show that the proposed redeployment algorithm not only keeps as many nodes as possible in the monitored space and maintains good network coverage and connectivity rates during network operation, but also reduces node movement distance during redeployment and prolongs the network lifetime. PMID:28029124
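The level-by-level "ready" broadcast that stratifies the topology into a connected tree rooted at the sink behaves like a breadth-first search. A minimal sketch (the adjacency-dict representation is an assumption):

```python
from collections import deque

def stratify(adjacency, sink):
    """Build a connected tree rooted at the sink: each node's parent is
    the neighbor through which it first heard the 'ready' message, and
    its level is its hop distance from the sink (a BFS tree)."""
    parent = {sink: None}
    level = {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:      # first time v hears 'ready'
                parent[v] = u
                level[v] = level[u] + 1
                queue.append(v)
    return parent, level
```

Nodes absent from the returned dictionaries are disconnected from the sink, which is exactly what the connectivity-rate metric in the abstract would penalize.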
A new approach to enhance the performance of decision tree for classifying gene expression data.
Hassan, Md; Kotagiri, Ramamohanarao
2013-12-20
Gene expression data classification is a challenging task due to the large dimensionality and very small number of samples. The decision tree is one of the popular machine learning approaches for such classification problems. However, existing decision tree algorithms use a single gene feature at each node to split the data into child nodes and hence may suffer from poor performance, especially when classifying gene expression datasets. By using a new decision tree algorithm in which each node of the tree consists of more than one gene, we enhance the classification performance of traditional decision tree classifiers. Our method selects suitable genes that are combined using a linear function to form a derived composite feature. To determine the structure of the tree we use the area under the Receiver Operating Characteristic curve (AUC). Experimental analysis demonstrates higher classification accuracy for the new decision tree compared to other existing decision trees in the literature. We experimentally compare the effect of our scheme against other well-known decision tree techniques, and the experiments show that our algorithm can substantially boost the classification performance of the decision tree.
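The two ingredients of the approach, a linear composite of gene features at a node and the AUC used to score it, can be sketched as follows. The rank-sum form of the AUC is standard; the two-gene combination and its weights are illustrative assumptions, not the paper's gene-selection procedure.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def composite_feature(gene1, gene2, w1, w2):
    """Derived feature for one tree node: a linear combination of the
    expression values of two selected genes."""
    return [w1 * a + w2 * b for a, b in zip(gene1, gene2)]
```

A node's split would then threshold the composite feature, with the weights chosen to maximize the AUC of the resulting scores.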
An Optimal Method for Detecting Internal and External Intrusion in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Aliahmadipour, Laya; Javidi, Mohammad M.
Mobile Ad hoc Networks (MANETs) are formed by sets of mobile hosts which communicate among themselves through radio waves. The hosts establish the infrastructure and cooperate to forward data in a multi-hop fashion without central administration. Due to their communication type and resource constraints, MANETs are vulnerable to diverse types of attacks and intrusions. In this paper, we propose a method, based on game theory, for preventing internal intrusions and detecting external intruders in a mobile ad hoc network. One way to reduce the resource consumption of external intrusion detection is to elect a leader for each cluster to provide the intrusion detection service to the other nodes in its cluster; we call this the moderate mode. The moderate mode is only suitable when the probability of attack is low. Once the probability of attack is high, victim nodes should launch their own IDS to detect and thwart intrusions; we call this the robust mode. The leader should not be a malicious or selfish node and must detect external intrusions in its cluster at minimum cost. Our proposed method has three steps: the first step builds trust relationships between nodes and estimates a trust value for each node to prevent internal intrusion; in the second step we propose an optimal method for leader election using the trust values; and in the third step we find the threshold value for notifying a victim node to launch its IDS once the probability of attack exceeds that value. In the first and third steps we apply Bayesian game theory. Thanks to game theory, trust values and an honest leader, our method can effectively improve network security and performance and reduce resource consumption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Abdelaziz, Omar; Shrestha, Som S
Oak Ridge National Laboratory (ORNL) recently conducted extensive laboratory drop-in investigations of lower Global Warming Potential (GWP) refrigerants to replace R-22 and R-410A. ORNL studied propane, DR-3, ARM-20B, N-20B and R-444B as lower-GWP replacements for R-22 in a mini-split room air conditioner (RAC) originally designed for R-22, and R-32, DR-55, ARM-71A and L41-2 in a mini-split RAC designed for R-410A. We obtained laboratory testing results with very good energy balance and nominal measurement uncertainty. Drop-in studies are not enough to judge the overall performance of the alternative refrigerants, since their thermodynamic and transport properties might favor different heat exchanger configurations, e.g., cross-flow, counter-flow, etc. This study compares the optimized performances of the individual refrigerants using a physics-based system modeling tool. The DOE/ORNL Heat Pump Design Model (HPDM) was used to model the mini-split RACs by inputting detailed heat exchanger geometries, compressor displacement and efficiencies, as well as other relevant system components. The RAC models were calibrated against the lab data for each individual refrigerant. The calibrated models were then used to conduct a design optimization of the cooling performance by varying the compressor displacement to match the required capacity, and changing the number of circuits, refrigerant flow direction, tube diameters, and air flow rates in the condenser and evaporator at 100% and 50% cooling capacities. This paper compares the optimized performance results for all alternative refrigerants and highlights the best candidates for R-22 and R-410A replacement.
An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks
Penumalli, Chakradhar; Palanichamy, Yogesh
2015-01-01
A new energy-efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem (BSP) while also considering each node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks (MANETs) and Wireless Sensor Networks (WSNs). The CDS of the graph representing a network has a significant impact on the efficient design of routing protocols in wireless networks. Here the CDS is constructed by a distributed algorithm with activity scheduling based on the unit disk graph (UDG). Node mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy-efficient CDS. The performance is evaluated at various node densities, transmission ranges, and mobility rates. Theoretical analysis and simulation results of this algorithm are also presented and yield better results. PMID:26221627
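A CDS is typically grown greedily. The sketch below covers only the dominating-set step (the connectivity-repair step of a full CDS construction is omitted), using residual energy as the tie-breaker the abstract mentions; the data layout is an assumption.

```python
def greedy_dominating_set(adjacency, energy):
    """Greedy dominating-set sketch: repeatedly pick the node that covers
    the most still-uncovered nodes, breaking ties by residual energy.
    A full CDS algorithm would additionally connect the chosen nodes."""
    uncovered = set(adjacency)
    chosen = []
    while uncovered:
        best = max(
            adjacency,
            key=lambda u: (len(({u} | set(adjacency[u])) & uncovered),
                           energy[u]),  # residual-energy tie-break
        )
        chosen.append(best)
        uncovered -= {best} | set(adjacency[best])
    return chosen
```

Preferring high-residual-energy nodes when coverage is tied spreads the backbone burden, which is the stability argument the abstract makes.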
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on levy-flight swarm intelligence, referred to as artificial bee colony levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality-of-service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design, improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
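Levy-flight steps in swarm algorithms of this kind are commonly drawn with Mantegna's algorithm; the sketch below uses that standard construction as an assumption, since the abstract does not specify its step-generation method.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a heavy-tailed step length via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma^2) and v ~ N(0, 1),
    where sigma is chosen so the step follows a Levy-stable law."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

Mostly small steps with occasional long jumps is what lets a levy-flight search escape local optima better than a plain Gaussian walk.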
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
A Multi-Hop Clustering Mechanism for Scalable IoT Networks.
Sung, Yoonyoung; Lee, Sookyoung; Lee, Meejeong
2018-03-23
It is expected that up to 26 billion Internet of Things (IoT) devices equipped with sensors and wireless communication capabilities will be connected to the Internet by 2020 for various purposes. In a large-scale IoT network, having each node connected to the Internet with an individual connection may face serious scalability issues. The scalability problem of the IoT network may be alleviated by grouping the nodes into clusters and having a representative node in each cluster connect to the Internet on behalf of the other nodes in the cluster, instead of per-node Internet connection and communication. In this paper, we propose a multi-hop clustering mechanism for IoT networks to minimize the number of required Internet connections. Specifically, the objective of the proposed mechanism is to select the minimum number of coordinators, which take the role of a representative node for the cluster, i.e., having the Internet connection on behalf of the rest of the nodes in the cluster, and to map a partition of the IoT nodes onto the selected set of coordinators so as to minimize the total distance between the nodes and their respective coordinators under a constraint on the maximum hop count between the IoT nodes and their respective coordinators. Since this problem can be mapped to the set cover problem, which is known to be NP-hard, we pursue a heuristic approach and analyze the complexity of the proposed solution. Through a set of experiments with varying parameters, the proposed scheme shows a 63-87.3% reduction of the Internet connections depending on the number of IoT nodes, while that of the optimal solution is 65.6-89.9% in a small-scale network. Moreover, it is shown that the performance characteristics of the proposed mechanism coincide with the expected performance characteristics of the optimal solution in a large-scale network.
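Since coordinator selection maps to set cover, the classic greedy approximation is the natural heuristic. A sketch under the assumption that the coverage of each candidate (e.g., the nodes within the hop limit of that candidate) has been precomputed:

```python
def greedy_set_cover(universe, candidates):
    """Classic greedy set-cover heuristic (ln(n)-approximation): pick the
    candidate covering the most still-uncovered nodes until every node
    has a coordinator or no candidate can cover anything more."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            break  # remaining nodes are unreachable within the hop limit
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen
```

Each chosen candidate becomes a coordinator; the nodes it covers form (part of) its cluster, and ties in coverage could further be broken by total node-to-coordinator distance as the paper's objective suggests.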
Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset.
Ha, Richard; Chang, Peter; Karcich, Jenika; Mutasa, Simukayi; Fardanesh, Reza; Wynn, Ralph T; Liu, Michael Z; Jambawalikar, Sachin
2018-04-25
The aim of this study is to evaluate the role of convolutional neural networks (CNNs) in predicting axillary lymph node metastasis using a breast MRI dataset. An institutional review board (IRB)-approved retrospective review of our database from 1/2013 to 6/2016 identified 275 axillary lymph nodes for this study. 133 biopsy-proven metastatic axillary lymph nodes and 142 negative control lymph nodes were identified based on benign biopsies (100) and on healthy MRI screening patients (42) with at least 3 years of negative follow-up. For each breast MRI, the axillary lymph node was identified on the first T1 post-contrast dynamic images and underwent 3D segmentation using the open-source software platform 3D Slicer. A 32 × 32 patch was then extracted from the center slice of the segmented tumor data. A CNN was designed for lymph node prediction based on each of these cropped images. The CNN consisted of seven convolutional layers and max-pooling layers, with 50% dropout applied in the linear layer. In addition, data augmentation and L2 regularization were performed to limit overfitting. Training was implemented using the Adam optimizer, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. Code for this study was written in Python using the TensorFlow module (1.0.0). Experiments and CNN training were done on a Linux workstation with an NVIDIA GTX 1070 Pascal GPU. A two-class axillary lymph node metastasis prediction model was evaluated. For each lymph node, a final softmax score threshold of 0.5 was used for classification. Based on this, the CNN achieved a mean five-fold cross-validation accuracy of 84.3%. It is feasible for current deep CNN architectures to be trained to predict the likelihood of axillary lymph node metastasis. A larger dataset will likely improve our prediction model, which can potentially offer a non-invasive alternative to core needle biopsy and even sentinel lymph node evaluation.
Traffic-engineering-aware shortest-path routing and its application in IP-over-WDM networks [Invited]
NASA Astrophysics Data System (ADS)
Lee, Youngseok; Mukherjee, Biswanath
2004-03-01
Single shortest-path routing is known to perform poorly for Internet traffic engineering (TE), where the typical optimization objective is to minimize the maximum link load. Splitting traffic uniformly over equal-cost multiple shortest paths in the Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) protocols does not always minimize the maximum link load when the multiple paths are not carefully selected for the global traffic demand matrix. However, a TE-aware shortest path among all the equal-cost multiple shortest paths between each ingress-egress pair can be selected such that the maximum link load is significantly reduced. IP routers can use the globally optimal TE-aware shortest path without any change to existing routing protocols and without any serious configuration overhead. While calculating TE-aware shortest paths, the destination-based forwarding constraint at each node must be satisfied, because an IP router forwards a packet to the next hop toward the destination by looking up the destination prefix. We present a mathematical formulation of the problem of finding a set of TE-aware shortest paths for a given network as an integer linear program, and we propose a simple heuristic for solving large instances of the problem. We then explore the use of the proposed algorithm in the integrated TE method for IP-over-WDM networks. The proposed algorithm is evaluated through simulations in IP networks as well as in IP-over-WDM networks.
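The core selection step, choosing among equal-cost shortest paths the one that keeps the maximum link load smallest, can be sketched as follows. Path enumeration and load bookkeeping are assumed to be done elsewhere; this is not the paper's ILP formulation, only the greedy per-pair choice it motivates.

```python
def pick_te_aware_path(equal_cost_paths, link_load, demand):
    """Among equal-cost shortest paths (node lists), pick the one whose
    most-loaded link is smallest after routing the demand over it."""
    def max_load_after(path):
        links = zip(path, path[1:])          # consecutive node pairs
        return max(link_load[e] + demand for e in links)
    return min(equal_cost_paths, key=max_load_after)
```

Because every candidate is already a shortest path, this choice changes nothing for the routing protocol itself; only the tie-break among equal-cost next hops is steered by load.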
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 terms used in image processing: adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, database, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Social-aware data dissemination in opportunistic mobile social networks
NASA Astrophysics Data System (ADS)
Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Han, Xiaowei
Opportunistic Mobile Social Networks (OMSNs), formed by mobile users with social relationships and characteristics, enhance spontaneous communication among users that opportunistically encounter each other. Such networks can be exploited to improve the performance of data forwarding. Discovering optimal relay nodes is one of the important issues for efficient data propagation in OMSNs. Although traditional centrality definitions identify node features in a network, they cannot effectively identify the influential nodes for data dissemination in OMSNs. Existing protocols take advantage of spatial contact frequency and social characteristics to enhance transmission performance, but they have not fully exploited the benefits of the relations and interactions between geographical information, social features and user interests. In this paper, we first evaluate these three characteristics of users and design a routing protocol called the Geo-Social-Interest (GSI) protocol to select optimal relay nodes. We compare the performance of GSI using the real INFOCOM06 data sets. The experimental results demonstrate that GSI outperforms the other protocols, with the highest data delivery ratio and low communication overhead.
Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes
NASA Astrophysics Data System (ADS)
Zhai, Y.; Liu, J.; Liu, L.
2018-04-01
Aiming at the lack of a scientific and reasonable judgment of vehicle delivery scale and the insufficient optimization of scheduling decisions, and based on the features of bike-sharing usage, this paper analyzes the applicability of a discrete-time, discrete-state Markov chain and proves it to be irreducible, aperiodic and positive recurrent. Based on this analysis, the paper concludes that the limit-state (steady-state) probability of the bike-sharing Markov chain exists and is independent of the initial probability distribution. The paper then analyzes the difficulty of transition-probability-matrix parameter statistics and of solving the linear equation group in the traditional solution algorithm for the bike-sharing Markov chain. To improve feasibility, this paper proposes a "virtual two-node vehicle scale solution" algorithm, which treats all nodes other than the node to be solved as a single virtual node, and provides the transition probability matrix, the steady-state linear equation group, and the computational methods for the steady-state scale, steady-state arrival time and scheduling decision of the node to be solved. Finally, the paper evaluates the rationality and accuracy of the steady-state probability of the proposed algorithm by comparing it with the traditional algorithm. By solving the steady-state scale of the nodes one by one, the proposed algorithm is shown to be highly feasible because it lowers the computational difficulty and reduces the number of statistics required, which will help bike-sharing companies to optimize the scale and scheduling of nodes.
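Collapsing everything except the node under study into one virtual node yields a two-state Markov chain whose steady state has a closed form. A sketch (the transition-matrix parameterization is an illustrative assumption, not the paper's exact statistics):

```python
def steady_state_two_node(p_stay, q_stay):
    """Steady state of a two-state Markov chain with transition matrix
    [[p_stay, 1 - p_stay], [1 - q_stay, q_stay]] (rows sum to 1).
    Solving pi P = pi with pi0 + pi1 = 1 gives the closed form below."""
    p = 1.0 - p_stay  # probability of leaving state 0 (the node)
    q = 1.0 - q_stay  # probability of leaving state 1 (the virtual node)
    pi0 = q / (p + q)
    return pi0, 1.0 - pi0
```

This is why the virtual two-node reduction sidesteps solving the full linear system: each node's steady-state share follows from just two leave probabilities.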
Analytical network process based optimum cluster head selection in wireless sensor network.
Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad
2017-01-01
Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable health-monitoring sensors and a plethora of other areas. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Having constructed the topology through that technique, we use an analytical network process (ANP) model for cluster head (CH) selection in the WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which results in extending the overall network lifetime. The paper also analyzes the ANP method used for CH selection to provide a better understanding of the dependencies among the components involved in the evaluation process.
An optimal routing strategy on scale-free networks
NASA Astrophysics Data System (ADS)
Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Qi, Zhaohui; Zhao, Yongbin
Traffic is one of the most fundamental dynamical processes in networked systems. With the traditional shortest path routing (SPR) protocol, traffic congestion is likely to occur at the hub nodes of scale-free networks. In this paper, we propose an improved optimal routing (IOR) strategy based on the betweenness centrality and the degree centrality of nodes in scale-free networks. With the proposed strategy, routing paths can accurately bypass hub nodes in the network to enhance transport efficiency. Simulation results show that the traffic capacity, as well as other indexes reflecting transportation efficiency, is further improved with the IOR strategy. Owing to the significantly improved traffic performance, this study is helpful for designing more efficient routing strategies in communication and transportation systems.
NASA Astrophysics Data System (ADS)
Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen
2018-04-01
First, the entanglement source deployment problem is studied in a quantum multi-hop network, where it has a significant influence on quantum connectivity. Two optimization algorithms for limited entanglement sources are introduced in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83%, respectively, compared to random deployment, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running when all nodes are covered.
Adaptive Diagrams: Handing Control over to the Learner to Manage Split-Attention Online
ERIC Educational Resources Information Center
Agostinho, Shirley; Tindall-Ford, Sharon; Roodenrys, Kylie
2013-01-01
Based on cognitive load theory, it is well known that when studying a diagram that includes explanatory text, optimal learning occurs when the text is physically positioned close to the diagram as it eliminates the need for learners to split their attention between the two sources of information. What is not known is the effect on learning when…
Lee, Jun Ho; Lee, Hyun Chul; Yi, Ha Woo; Kim, Bong Kyun; Bae, Soo Youn; Lee, Se Kyung; Choe, Jun-Ho; Kim, Jung-Han; Kim, Jee Soo
2016-04-01
The influence of serum thyroglobulin (Tg) and thyroidectomy status on Tg in fine-needle aspiration cytology (FNAC) washout fluid is unclear. A total of 282 lymph nodes were prospectively subjected to FNAC, fine-needle aspiration (FNA)-Tg measurement, and frozen and permanent biopsies. We evaluated the diagnostic performance of several predetermined FNA-Tg cutoff values for recurrence/metastasis in lymph nodes according to thyroidectomy status. The diagnostic performance of FNA-Tg varied according to thyroidectomy status. The optimized cutoff value of FNA-Tg was 2.2 ng/mL. However, among FNAC-negative lymph nodes, the FNA-Tg cutoff value of 0.9 ng/mL showed better diagnostic performance in patients with a thyroid gland. An FNA-Tg/serum-Tg cutoff ratio of 1 showed the best diagnostic performance in patients without a thyroid gland. Applying the optimal cutoff values of FNA-Tg according to thyroid gland status and serum Tg level facilitates the diagnostic evaluation of neck lymph node recurrences/metastases in patients with papillary thyroid carcinoma (PTC). Head Neck 38: E1705-E1712, 2016. © 2015 Wiley Periodicals, Inc.
Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring
Lazarescu, Mihai T.
2015-01-01
Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery. PMID:25912349
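The quoted lifetime figure follows from simple average-current arithmetic: a 1 Ah battery lasting over 16 years implies an average draw of roughly 7 µA. A sketch of that back-of-the-envelope model (it deliberately ignores battery self-discharge and temperature effects):

```python
def lifetime_years(battery_mah, avg_current_ua):
    """Average-current battery model: lifetime = capacity / mean draw."""
    hours = (battery_mah * 1000.0) / avg_current_ua  # mAh -> uAh, / uA
    return hours / (24.0 * 365.0)
```

For a 1000 mAh cell at a 7 µA average draw this gives a little over 16 years, consistent with the theoretical lifetime claimed for the sensor nodes.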
Recent Performance Results of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.
2017-10-01
Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features including two compute partitions, one with dual socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes, use of on package high bandwidth memory (HBM) for KNL nodes, ability to configure KNL nodes with respect to HBM model and on die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results of work will be presented on performance of VPIC on Haswell and KNL partitions for single node runs and runs at scale. Results include use of burst buffers at scale to optimize I/O, comparison of strategies for using MPI and threads, performance benefits using HBM and effectiveness of using intrinsics for vectorization. Work performed under auspices of U.S. Dept. of Energy by Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by LANL LDRD program.
An Energy Balanced and Lifetime Extended Routing Protocol for Underwater Sensor Networks.
Wang, Hao; Wang, Shilian; Zhang, Eryang; Lu, Luxi
2018-05-17
Energy limitation is a serious challenge in designing routing protocols for underwater sensor networks (UWSNs). To prolong the network lifetime with limited battery power, an energy balanced and efficient routing protocol, called the energy balanced and lifetime extended routing protocol (EBLE), is proposed in this paper. The proposed EBLE not only balances traffic loads according to residual energy, but also optimizes data transmissions by selecting low-cost paths. Two phases are operated in the EBLE data transmission process: (1) the candidate forwarding set selection phase and (2) the data transmission phase. In the candidate forwarding set selection phase, nodes update candidate forwarding nodes by broadcasting their position and residual energy level information. The cost value of available nodes is calculated and stored in each sensor node. Then, in the data transmission phase, high residual energy and relatively low-cost paths are selected based on the cost function and residual energy level information. We also provide a detailed analysis of optimal energy consumption in UWSNs. Numerical simulation results on a variety of node distributions and data load distributions show that EBLE outperforms other routing protocols (BTM, BEAR and direct transmission) in terms of network lifetime and energy efficiency.
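The two-phase EBLE process above amounts to a cost-based next-hop choice over the candidate forwarding set. The cost function below is a hypothetical stand-in (a weighted mix of link cost and residual energy), not the paper's actual formula:

```python
# Sketch of EBLE-style next-hop selection: prefer low-cost paths while
# favoring candidates with high residual energy. The weighting scheme is
# hypothetical; the paper derives its own cost function.

def select_next_hop(candidates, alpha=0.5):
    """candidates: list of (node_id, link_cost, residual_energy_fraction)."""
    best = None
    for node_id, link_cost, energy in candidates:
        # Penalize low residual energy so traffic load is balanced.
        score = alpha * link_cost + (1 - alpha) * (1.0 - energy)
        if best is None or score < best[1]:
            best = (node_id, score)
    return best[0]

print(select_next_hop([("a", 0.9, 0.9), ("b", 0.2, 0.1), ("c", 0.4, 0.8)]))
```

Here node "b" has the cheapest link but little energy left, so the balanced score routes through "c" instead.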
Optimization of wireless sensor networks based on chicken swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Wang, Qingxi; Zhu, Lihua
2017-05-01
In order to reduce the energy consumption of wireless sensor networks and improve the network survival time, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster head selection are improved using the chicken swarm optimization algorithm, and the positions of chickens that fall into local optima are updated by Levy flight, which enhances population diversity and preserves the global search capability of the algorithm. The new protocol avoids the premature death of intensively used nodes by balancing the load across the network nodes, thereby improving the survival time of the wireless sensor network. Simulation experiments show that the protocol outperforms both the LEACH protocol and a clustering routing protocol based on the particle swarm optimization algorithm in terms of energy consumption.
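The Levy-flight escape from local optima mentioned above can be sketched with Mantegna's algorithm for generating Levy-stable step lengths. The step scale and the update form are assumptions for illustration, not the paper's exact formulation:

```python
import math, random

# Sketch of a Levy-flight position update used to kick stagnated individuals
# out of local optima. Step generation follows Mantegna's algorithm; the
# 0.01 step scale is a hypothetical choice.

def levy_step(beta=1.5):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)  # heavy-tailed step length

def levy_update(position, best, scale=0.01):
    # Perturb each coordinate relative to the current best position.
    return [x + scale * levy_step() * (x - b) for x, b in zip(position, best)]

random.seed(0)
print(levy_update([1.0, 2.0], [0.5, 1.5]))
```

The heavy tail of the Levy distribution produces occasional long jumps, which is what restores population diversity after premature convergence.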
Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun
2017-08-20
This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node by beamforming. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the UCA beamforming peak side lobe (PSL) and the main lobe (ML) aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority.
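The quantity EBO/EBO-R optimizes, the PSL-to-ML power ratio of the steered UCA pattern, can be evaluated directly for an ideal phase-steered 12-antenna array of radius one wavelength. The 20-degree main-lobe exclusion window used below to locate the side lobes is a hypothetical choice:

```python
import cmath, math

# Sketch of the UCA beamforming objective: the PSL/ML power ratio of a
# 12-antenna circular array of radius one wavelength, phase-steered to a
# target azimuth. A real EBO/EBO-R run would evolve the excitations instead.

N, kr = 12, 2 * math.pi          # 12 antennas, radius = one wavelength
phis = [2 * math.pi * n / N for n in range(N)]

def array_factor(phi, target):
    # Co-phasal steering: each element is phase-compensated for the target.
    return abs(sum(cmath.exp(1j * kr * (math.cos(phi - p) - math.cos(target - p)))
                   for p in phis))

target = 0.0
samples = [2 * math.pi * i / 720 for i in range(720)]
gains = [array_factor(phi, target) for phi in samples]
main = max(gains)                # main lobe sits at the target direction
# Crude side-lobe search: exclude azimuths within 20 degrees of the target.
side = max(g for phi, g in zip(samples, gains)
           if min(abs(phi - target), 2 * math.pi - abs(phi - target)) > math.radians(20))
print(round(main, 2), round((side / main) ** 2, 3))
```

The main-lobe gain equals the element count N for co-phasal steering; the evolutionary algorithms then work to push the side-lobe term down.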
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate and at the same time supports DMA functionality, allowing for parallel message-passing.
Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH
NASA Astrophysics Data System (ADS)
Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.
2018-01-01
Smoothed particle hydrodynamics (SPH) with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution, which can markedly improve local accuracy and overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. The splitting algorithm is as follows: when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum and energy is ensured. Based on an error analysis, the splitting technique is designed to allow optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with a particle splitting technique is of high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
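The mother-to-four-daughters rule above can be sketched in a few lines: placing four equal-mass daughters that inherit the mother's velocity conserves mass and momentum by construction. The stencil offsets below are hypothetical (the paper tunes them via its error analysis):

```python
# Minimal sketch of splitting one SPH mother particle into four daughters
# on a small square stencil. The offset eps is a hypothetical placement
# parameter, not the paper's optimized value.

def split_particle(x, y, mass, vx, vy, eps=0.35):
    daughters = []
    for dx, dy in ((eps, eps), (eps, -eps), (-eps, eps), (-eps, -eps)):
        daughters.append({"x": x + dx, "y": y + dy,
                          "m": mass / 4.0, "vx": vx, "vy": vy})
    return daughters

ds = split_particle(0.0, 0.0, 1.0, 2.0, 0.0)
total_mass = sum(d["m"] for d in ds)
px = sum(d["m"] * d["vx"] for d in ds)
print(total_mass, px)  # mass and x-momentum match the mother particle
```

Energy conservation additionally constrains how the daughters' smoothing lengths are rescaled, which this sketch leaves out.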
Design and Manufacture of Structurally Efficient Tapered Struts
NASA Technical Reports Server (NTRS)
Brewster, Jebediah W.
2009-01-01
Composite materials offer the potential of weight savings for numerous spacecraft and aircraft applications. A composite strut is just one integral part of the node-to-node system, and optimization of the strut and node assembly is needed to take full advantage of the benefits of composite materials. Lockheed Martin designed and manufactured a very lightweight, one-piece composite tapered strut that is fully representative of a full-scale flight article. In addition, the team designed and built a prototype of the node and end fitting system that will effectively integrate and work with the full-scale flight articles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyachenko, Sergey A.; Zlotnik, Anatoly; Korotkevich, Alexander O.
Here, we develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjoining pipe. Our operator splitting numerical scheme is unconditionally stable and second-order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, also comparing its performance with that of two other state-of-the-art implicit schemes.
Line-plane broadcasting in a data communications network of a parallel computer
Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.
2010-06-08
Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
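The three-phase dimension-by-dimension flooding claimed above has a simple message count: each phase covers one more dimension, and every node except the root ends up receiving exactly one message. A sketch for a 3D mesh (the counting, not the patent's wire protocol):

```python
# Sketch of line-plane broadcasting on a 3D mesh: the root floods its X line,
# every node on that line floods its Y line, and every node in the resulting
# plane floods its Z line. We count the messages sent per phase.

def line_plane_broadcast(dims):
    nx, ny, nz = dims
    phase1 = nx - 1                 # root -> rest of its X axis
    phase2 = nx * (ny - 1)          # each X-line node -> its Y axis
    phase3 = nx * ny * (nz - 1)     # each plane node -> its Z axis
    return phase1 + phase2 + phase3

print(line_plane_broadcast((4, 4, 4)))  # every node but the root gets one message
```

The total always telescopes to nx*ny*nz - 1, so no node hears the message twice.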
Line-plane broadcasting in a data communications network of a parallel computer
Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.
2010-11-23
Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
Yen, Hong-Hsu
2009-01-01
In wireless sensor networks, data aggregation routing can reduce the number of data transmissions so as to achieve energy-efficient transmission. However, data aggregation introduces data retransmission caused by co-channel interference from neighboring sensor nodes. This co-channel interference can result in extra energy consumption and significant latency from retransmission, which jeopardizes the benefits of data aggregation. One possible solution to circumvent data retransmission caused by co-channel interference is to assign different channels to every sensor node that is within each other's interference range on the data aggregation tree. By associating each radio with a different channel, a sensor node can receive data from all the children nodes on the data aggregation tree simultaneously. This can reduce the latency from the data source nodes back to the sink so as to meet the user's delay QoS. Since the number of radios on each sensor node and the number of non-overlapping channels are both limited resources in wireless sensor networks, a challenging question is how to minimize the total transmission cost under a limited number of non-overlapping channels in multi-radio wireless sensor networks. This channel constrained data aggregation routing problem in multi-radio wireless sensor networks is NP-hard. I first model this problem as a mixed integer linear programming problem where the objective is to minimize the total transmission cost subject to the data aggregation routing, channel and radio resource constraints. The solution approach is based on the Lagrangean relaxation technique to relax some constraints into the objective function and then to derive a set of independent subproblems. By optimally solving these subproblems, we can not only calculate a lower bound on the original primal problem but also obtain useful information for constructing primal feasible solutions.
By incorporating these Lagrangean multipliers as link arc weights, optimization-based heuristics are proposed to obtain an energy-efficient data aggregation tree with better resource (channel and radio) utilization. The computational experiments show that the proposed optimization-based approach is superior to existing heuristics in all tested cases.
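The Lagrangean relaxation machinery above (relax constraints into the objective, solve independent subproblems, update multipliers) can be illustrated on a toy one-variable problem; the subgradient step rule is the textbook one, not the paper's specific schedule:

```python
# Generic subgradient update for Lagrangean multipliers, the mechanism used
# to turn relaxed constraints into arc weights. The toy problem (minimize
# x^2 subject to x >= 3, with x in [0, 10]) is illustrative only.

def solve_relaxed(lam):
    # min over x in [0, 10] of x^2 + lam * (3 - x); unconstrained argmin is lam/2
    x = min(max(lam / 2.0, 0.0), 10.0)
    return x, x * x + lam * (3 - x)   # relaxed solution and dual lower bound

lam, step = 0.0, 2.0
for _ in range(200):
    x, lower_bound = solve_relaxed(lam)
    lam = max(0.0, lam + step * (3 - x))   # subgradient of the dualized constraint
    step *= 0.95                           # diminishing step size
print(round(x, 2), round(lower_bound, 2))
```

The dual lower bound climbs toward the primal optimum (x = 3, objective 9), and the final multiplier values are exactly the kind of "price" the paper reuses as link arc weights.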
Measuring and Evaluating TCP Splitting for Cloud Services
NASA Astrophysics Data System (ADS)
Pathak, Abhinav; Wang, Y. Angela; Huang, Cheng; Greenberg, Albert; Hu, Y. Charlie; Kern, Randy; Li, Jin; Ross, Keith W.
In this paper, we examine the benefits of split-TCP proxies, deployed in an operational world-wide network, for accelerating cloud services. We consider a fraction of a network consisting of a large number of satellite datacenters, which host split-TCP proxies, and a smaller number of mega datacenters, which ultimately perform computation or provide storage. Using web search as an exemplary case study, our detailed measurements reveal that a vanilla TCP splitting solution deployed at the satellite DCs reduces the 95th percentile of latency by as much as 43% when compared to serving queries directly from the mega DCs. Through careful dissection of the measurement results, we characterize how individual components, including proxy stacks, network protocols, packet losses and network load, can impact the latency. Finally, we shed light on further optimizations that can fully realize the potential of the TCP splitting solution.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for the parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize the shared memory to reduce the memory copy overhead of the intra-node communication. Finally, we optimize the communication scheduling using the neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
A Game Theoretic Approach for Balancing Energy Consumption in Clustered Wireless Sensor Networks.
Yang, Liu; Lu, Yinzhi; Xiong, Lian; Tao, Yang; Zhong, Yuanchang
2017-11-17
Clustering is an effective topology control method in wireless sensor networks (WSNs), since it can enhance the network lifetime and scalability. To prolong the network lifetime in clustered WSNs, an efficient cluster head (CH) optimization policy is essential to distribute the energy among sensor nodes. Recently, game theory has been introduced to model clustering. Each sensor node is considered as a rational and selfish player which will play a clustering game with an equilibrium strategy. Then it decides whether to act as the CH according to this strategy for a tradeoff between providing required services and energy conservation. However, how to get the equilibrium strategy while maximizing the payoff of sensor nodes has rarely been addressed to date. In this paper, we present a game theoretic approach for balancing energy consumption in clustered WSNs. With our novel payoff function, realistic sensor behaviors can be captured well. The energy heterogeneity of nodes is considered by incorporating a penalty mechanism in the payoff function, so the nodes with more energy will compete for CHs more actively. We have obtained the Nash equilibrium (NE) strategy of the clustering game through convex optimization. Specifically, each sensor node can achieve its own maximal payoff when it makes the decision according to this strategy. Through extensive simulations, our proposed game theoretic clustering approach is shown to achieve good energy balancing, and consequently the network lifetime is greatly enhanced.
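The flavor of such a clustering-game equilibrium can be seen in the textbook symmetric "volunteer's dilemma" form, where each node declares itself CH with a probability that makes it indifferent between volunteering and waiting. The paper's payoff additionally penalizes energy heterogeneity; this sketch omits that:

```python
# Symmetric mixed-strategy equilibrium for a CH "volunteer's dilemma":
# each of n nodes declares itself cluster head with probability p, where v
# is the value of having a CH and c < v the cost of serving as one. This is
# the textbook form, not the paper's energy-aware payoff.

def ch_probability(n, v, c):
    # Indifference condition: c = v * P(no other node volunteers)
    #                           = v * (1 - p) ** (n - 1)
    return 1.0 - (c / v) ** (1.0 / (n - 1))

p = ch_probability(n=10, v=1.0, c=0.2)
print(round(p, 3))
```

As the cluster grows, each node volunteers less often, which is the free-riding effect an energy-dependent penalty is meant to counteract.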
Guo, Wei-Feng; Zhang, Shao-Wu; Shi, Qian-Qian; Zhang, Cheng-Ming; Zeng, Tao; Chen, Luonan
2018-01-19
The advances in target control of complex networks not only offer new insights into the general control dynamics of complex systems, but are also useful for practical applications in systems biology, such as discovering new therapeutic targets for disease intervention. In many cases, e.g. drug target identification in biological networks, we usually require target control of a subset of nodes (i.e., disease-associated genes) with minimum cost, and we further expect that more of the driver nodes are consistent with certain well-selected network nodes (i.e., prior-known drug-target genes). Therefore, motivated by this fact, we pose and address a new and practical problem called the target control problem with objectives-guided optimization (TCO): how can we control the interested variables (or targets) of a system with optional driver nodes by minimizing the total quantity of drivers and meanwhile maximizing the quantity of constrained nodes among those drivers. Here, we design an efficient algorithm (TCOA) to find the optional driver nodes for controlling targets in complex networks. We apply our TCOA to several real-world networks, and the results show that our TCOA can identify more precise driver nodes than the existing control-focus approaches. Furthermore, we have applied TCOA to two biomolecular expert-curated networks. Source code for our TCOA is freely available from http://sysbio.sibcb.ac.cn/cb/chenlab/software.htm or https://github.com/WilfongGuo/guoweifeng. In previous theoretical research on full control, there is an observation that the driver nodes tend to be low-degree nodes. However, for target control of biological networks, we find interestingly that the driver nodes tend to be high-degree nodes, which is more consistent with biological experimental observations.
Furthermore, our results supply novel insights into how to efficiently target-control a complex system, and in particular much evidence of the practical strategic utility of TCOA in incorporating prior drug information into potential drug-target forecasts. Our method thus paves a novel and efficient way to identify drug targets for leading the phenotype transitions of underlying biological networks.
Secure Multiuser Communications in Wireless Sensor Networks with TAS and Cooperative Jamming
Yang, Maoqiang; Zhang, Bangning; Huang, Yuzhen; Yang, Nan; Guo, Daoxing; Gao, Bin
2016-01-01
In this paper, we investigate the secure transmission in wireless sensor networks (WSNs) consisting of one multiple-antenna base station (BS), multiple single-antenna legitimate users, one single-antenna eavesdropper and one multiple-antenna cooperative jammer. In an effort to reduce the scheduling complexity and extend the battery lifetime of the sensor nodes, the switch-and-stay combining (SSC) scheduling scheme is exploited over the sensor nodes. Meanwhile, transmit antenna selection (TAS) is employed at the BS and cooperative jamming (CJ) is adopted at the jammer node, aiming at achieving a satisfactory secrecy performance. Moreover, depending on whether the jammer node has the global channel state information (CSI) of both the legitimate channel and the eavesdropper’s channel, it explores a zero-forcing beamforming (ZFB) scheme or a null-space artificial noise (NAN) scheme to confound the eavesdropper while avoiding the interference to the legitimate user. Building on this, we propose two novel hybrid secure transmission schemes, termed TAS-SSC-ZFB and TAS-SSC-NAN, for WSNs. We then derive the exact closed-form expressions for the secrecy outage probability and the effective secrecy throughput of both schemes to characterize the secrecy performance. Using these closed-form expressions, we further determine the optimal switching threshold and obtain the optimal power allocation factor between the BS and jammer node for both schemes to minimize the secrecy outage probability, while the optimal secrecy rate is decided to maximize the effective secrecy throughput for both schemes. Numerical results are provided to verify the theoretical analysis and illustrate the impact of key system parameters on the secrecy performance. PMID:27845753
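The secrecy outage probability analyzed above can be approximated by Monte Carlo for a bare Rayleigh wiretap link (no TAS, SSC or jamming modeled); the SNRs and target secrecy rate below are hypothetical:

```python
import math, random

# Monte Carlo sketch of secrecy outage probability over Rayleigh fading:
# outage occurs when log2(1+snr_b) - log2(1+snr_e) falls below the target
# secrecy rate. Average SNRs and the rate are illustrative values only.

def secrecy_outage(avg_snr_b, avg_snr_e, rate, trials=200_000, seed=1):
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        snr_b = rng.expovariate(1.0) * avg_snr_b   # Rayleigh fading -> exponential SNR
        snr_e = rng.expovariate(1.0) * avg_snr_e
        cs = max(0.0, math.log2(1 + snr_b) - math.log2(1 + snr_e))
        outages += cs < rate
    return outages / trials

p = secrecy_outage(avg_snr_b=10.0, avg_snr_e=1.0, rate=1.0)
print(round(p, 3))
```

Closed-form expressions like those derived in the paper replace exactly this kind of simulation loop, and the Monte Carlo estimate is the standard way to verify them numerically.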
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, the authors give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency, and it is shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
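For reference, the sequential arc-consistency baseline that such parallel algorithms accelerate can be sketched as the classic AC-3 worklist loop (AC-3 rather than AC-4, for brevity):

```python
from collections import deque

# Sequential AC-3 sketch for arc consistency: prune domain values with no
# support on any constraint arc, re-queueing affected arcs until a fixed
# point is reached.

def ac3(domains, constraints):
    """domains: {var: set(values)}; constraints: {(x, y): predicate(a, b)}."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        pruned = {a for a in domains[x]
                  if not any(pred(a, b) for b in domains[y])}
        if pruned:
            domains[x] -= pruned
            # Revisit every arc pointing at x, since its domain shrank.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda b, a: b > a}
print(ac3(doms, cons))
```

The re-queueing step is the inherently sequential part the O(na) lower bound speaks to: a pruned value can trigger a chain of revisions across arcs.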
Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal
2015-01-01
It is well-known that clustering partitions network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm and the obtained simulation results show convergence, learning and adaptability of the algorithm to dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191
Optimized retrievals of precipitable water from the VAS 'split window'
NASA Technical Reports Server (NTRS)
Chesters, Dennis; Robinson, Wayne D.; Uccellini, Louis W.
1987-01-01
Precipitable water fields have been retrieved from the VISSR Atmospheric Sounder (VAS) using a radiation transfer model for the differential water vapor absorption between the 11- and 12-micron 'split window' channels. Previous moisture retrievals using only the split window channels provided very good space-time continuity but poor absolute accuracy. This note describes how retrieval errors can be significantly reduced, from ±0.9 to ±0.6 gm/sq cm, by empirically optimizing the effective air temperature and absorption coefficients used in the two-channel model. The differential absorption between the VAS 11- and 12-micron channels, empirically estimated from 135 colocated VAS-RAOB observations, is found to be approximately 50 percent smaller than the theoretical estimates. Similar discrepancies have been noted previously between theoretical and empirical absorption coefficients applied to the retrieval of sea surface temperatures using radiances observed by VAS and polar-orbiting satellites. These discrepancies indicate that radiation transfer models for the 11-micron window appear to be less accurate than the satellite observations.
Broadcasting a message in a parallel computer
Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN
2011-08-02
Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
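The split between local and global computation in GPM can be sketched as a power iteration whose normalization constant, a global sum, is obtained by average consensus; the ring topology and uniform Metropolis-style weights below are illustrative assumptions:

```python
# Sketch of a consensus-based power method: each node owns one coordinate of
# the iterate; the global norm needed for normalization is obtained by
# average-consensus gossip (here uniform 1/3 weights on a ring).

def consensus_average(values, rounds=50):
    n = len(values)
    x = list(values)
    for _ in range(rounds):
        x = [(x[i] + x[(i - 1) % n] + x[(i + 1) % n]) / 3.0 for i in range(n)]
    return x  # every entry approaches the true average

def power_method(matrix, iters=100):
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Each node learns sum(w_j^2)/n via consensus, then normalizes locally.
        avg_sq = consensus_average([wi * wi for wi in w])[0]
        norm = (n * avg_sq) ** 0.5
        v = [wi / norm for wi in w]
    # Rayleigh quotient as the dominant-eigenvalue estimate.
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(vi * mvi for vi, mvi in zip(v, mv))

A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
print(round(power_method(A), 3))
```

For this tridiagonal test matrix the dominant eigenvalue is 2 + sqrt(2), and the consensus step is exact here because a three-node ring averages fully in one round; larger graphs need more gossip rounds per iteration.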
Sequential detection of temporal communities by estrangement confinement.
Kawadia, Vikas; Sreenivasan, Sameet
2012-01-01
Temporal communities are the result of a consistent partitioning of nodes across multiple snapshots of an evolving network, and they provide insights into how dense clusters in a network emerge, combine, split and decay over time. To reliably detect temporal communities we need to not only find a good community partition in a given snapshot but also ensure that it bears some similarity to the partition(s) found in the previous snapshot(s), a particularly difficult task given the extreme sensitivity of community structure yielded by current methods to changes in the network structure. Here, motivated by the inertia of inter-node relationships, we present a new measure of partition distance called estrangement, and show that constraining estrangement enables one to find meaningful temporal communities at various degrees of temporal smoothness in diverse real-world datasets. Estrangement confinement thus provides a principled approach to uncovering temporal communities in evolving networks.
NEAMS-IPL MOOSE Framework Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaughter, Andrew Edward; Permann, Cody James; Kong, Fande
The Multiapp Picard iteration Milestone's purpose was to support a framework-level "tight-coupling" method within the hierarchical Multiapp execution scheme. This new solution scheme gives developers new choices for running multiphysics applications, particularly those with very strong nonlinear effects or those requiring coupling across disparate time or spatial scales. Figure 1 shows a typical Multiapp setup in MOOSE. Each node represents a separate simulation containing a separate equation system. MOOSE solves the equation system on each node in turn, in a user-controlled manner. Information can be aggregated or split and transferred from parent to child or child to parent as needed between solves. Performing a tightly coupled execution scheme using this method wasn't possible in the original implementation. This was due to the inability to back up to a previous state once a converged solution was accepted at a particular Multiapp level.
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.
2016-03-01
The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitism non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
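The operation shared by both Pareto-based optimizers named above is extracting the non-dominated set of candidate designs. A minimal sketch, with made-up (fuel consumption, energy cost) values and both objectives minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization assumed)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (fuel consumption, energy cost) pairs for candidate sizings
designs = [(5.2, 3.1), (4.8, 3.5), (5.0, 2.9), (6.0, 4.0)]
print(pareto_front(designs))  # → [(4.8, 3.5), (5.0, 2.9)]
```

Both NSGA-II and the synchronous self-learning Pareto strategy repeatedly apply this filter (plus diversity preservation) to steer the population toward the trade-off front.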
MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm
Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.
2014-01-01
The problem of obtaining the transmission rate in an ad hoc network consists in adjusting the power of each node so that the signal-to-interference ratio (SIR) is ensured while the energy required to transmit from one node to another is obtained at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control, which achieves an improvement of 10% compared with the scheme that uses a handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that jointly addresses power combining, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the CSMA-CDMA protocol without optimization. Therefore, we show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
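A toy genetic algorithm in the same spirit (elitist selection, one-point crossover, mutation over per-node rates) is sketched below. The shared-capacity penalty is a hypothetical stand-in for the SIR constraint, not the paper's channel model:

```python
import random

RATES = [1, 2, 4, 8]          # selectable per-node data rates (toy values)

def fitness(chrom, capacity=12):
    """Total rate, penalized when aggregate demand exceeds a shared
    channel capacity (a crude stand-in for the SIR constraint)."""
    total = sum(chrom)
    return total if total <= capacity else capacity - (total - capacity)

def evolve(n_nodes=4, pop_size=20, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(RATES) for _ in range(n_nodes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_nodes)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # mutation
                child[rng.randrange(n_nodes)] = rng.choice(RATES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

With elitism the best chromosome never degrades, so the search settles on rate vectors that fill, but do not exceed, the shared capacity.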
NASA Astrophysics Data System (ADS)
Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi
2015-01-01
We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for optimal to suboptimal resource assignments at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with existing routing strategies [3,4] for different network topologies.
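The Metropolis step at the heart of such an exploration can be sketched on a toy cost. Here the cost is a simple load-imbalance proxy, not the paper's latency/overload model:

```python
import math
import random

def metropolis_assign(tasks, nodes, cost, T=1.0, steps=2000, seed=0):
    """Anneal a task-to-node assignment with Metropolis acceptance:
    a move raising the global cost by dE is kept with prob exp(-dE/T)."""
    rng = random.Random(seed)
    assign = {t: rng.choice(nodes) for t in tasks}
    e = cost(assign)
    for _ in range(steps):
        t = rng.choice(tasks)
        old = assign[t]
        assign[t] = rng.choice(nodes)
        e_new = cost(assign)
        if e_new > e and rng.random() >= math.exp(-(e_new - e) / T):
            assign[t] = old              # reject the uphill move
        else:
            e = e_new                    # accept (downhill or lucky)
    return assign, e

# toy cost: sum of squared node loads (minimized by balancing tasks)
tasks = list(range(8))
nodes = ['A', 'B']
def cost(assign):
    load = {n: 0 for n in nodes}
    for t, n in assign.items():
        load[n] += 1
    return sum(v * v for v in load.values())

best, e = metropolis_assign(tasks, nodes, cost, T=0.1)
print(e)  # near the balanced optimum of 4**2 + 4**2 = 32
```

Raising T lets the sampler wander into suboptimal (eventually congested) assignments, which is exactly the knob the abstract describes.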
Optimally Distributed Kalman Filtering with Data-Driven Communication †
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
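A send-on-delta trigger on a scalar Kalman filter illustrates the trade-off behind such data-driven schemes. This is a generic sketch, not the paper's algorithm or its consistency bounds; the noise levels and threshold are made up:

```python
import random

def run(threshold, q=0.01, r=0.1, steps=200, seed=3):
    """Scalar Kalman filter for a random-walk state; the node transmits
    its measurement only when the predicted innovation exceeds a
    threshold (a send-on-delta stand-in for data-driven communication)."""
    rng = random.Random(seed)
    x = 0.0                     # true state (random walk)
    est, p = 0.0, 1.0           # fusion-center estimate and variance
    sent = 0
    for _ in range(steps):
        x += rng.gauss(0, q ** 0.5)
        z = x + rng.gauss(0, r ** 0.5)
        p += q                  # predict
        if abs(z - est) > threshold:
            k = p / (p + r)     # update only when a value is transmitted
            est += k * (z - est)
            p *= (1 - k)
            sent += 1
    return sent

always = run(threshold=0.0)
rarely = run(threshold=0.5)
print(always, rarely)  # the larger threshold triggers far fewer transmissions
```

With threshold 0 this degenerates to the full-rate scheme; a positive threshold trades estimation freshness for communication savings.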
Sentinel Lymph Node Detection Using Carbon Nanoparticles in Patients with Early Breast Cancer
Lu, Jianping; Zeng, Yi; Chen, Xia; Yan, Jun
2015-01-01
Purpose Carbon nanoparticles have a strong affinity for the lymphatic system. The purpose of this study was to evaluate the feasibility of sentinel lymph node biopsy using carbon nanoparticles in early breast cancer and to optimize the application procedure. Methods Firstly, we performed a pilot study to determine the optimized conditions for using carbon nanoparticles for sentinel lymph node (SLN) detection by investigating 36 clinically node-negative breast cancer patients. In the subsequent prospective study, 83 patients with clinically node-negative breast cancer were included to evaluate SLNs using carbon nanoparticles. Another 83 SLNs were detected using blue dye. SLN detection parameters were compared between the methods. All patients, irrespective of SLN status, underwent axillary lymph node dissection for verification of axillary node status after the SLN biopsy. Results In the pilot study, a 1 ml carbon nanoparticles suspension used 10–15 min before surgery was associated with the best detection rate. In the subsequent prospective study, with carbon nanoparticles, the identification rate, accuracy, and false negative rate were 100%, 96.4%, and 11.1%, respectively. The identification rate and accuracy were 88% and 95.5%, with a false negative rate of 15.8%, using the blue dye technique. The use of the carbon nanoparticles suspension showed significantly superior results in identification rate (p = 0.001) and reduced false-negative results compared with the blue dye technique. Conclusion Our study demonstrated the feasibility and accuracy of using carbon nanoparticles for SLN mapping in breast cancer patients. Carbon nanoparticles are useful in SLN detection in institutions without access to radioisotope. PMID:26296136
Stability of synchrony against local intermittent fluctuations in tree-like power grids
NASA Astrophysics Data System (ADS)
Auer, Sabine; Hellmann, Frank; Krause, Marie; Kurths, Jürgen
2017-12-01
90% of all Renewable Energy Power in Germany is installed in tree-like distribution grids. Intermittent power fluctuations from such sources introduce new dynamics into the lower grid layers. At the same time, distributed resources will have to contribute to stabilize the grid against these fluctuations in the future. In this paper, we model a system of distributed resources as oscillators on a tree-like, lossy power grid and study its ability to withstand desynchronization from localized intermittent renewable infeed. We find a remarkable interplay between the network structure and the position of the node at which the fluctuations are fed in. An important precondition for our findings is the presence of losses in distribution grids. Then, the most network-central node splits the network into branches with different influence on network stability. Troublemakers, i.e., nodes at which fluctuations excite the grid particularly strongly, tend to lie in downstream branches with high net power outflow. For low coupling strength, we also find branches of nodes vulnerable to fluctuations anywhere in the network. These network regions can be predicted with high confidence using an eigenvector-based network measure that takes the turbulent nature of the perturbations into account. While we focus here on tree-like networks, the observed effects also appear, albeit less pronounced, for weakly meshed grids. On the other hand, the observed effects disappear for the lossless power grids often studied in the complex systems literature.
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, it is proposed to build wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm. PMID:26343660
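The PSO machinery underlying detectors like D-PSO can be shown in generic form. This is plain particle swarm optimization on a toy objective standing in for a BER-style cost; the D-PSO detector itself (discrete search over symbol hypotheses) is not reproduced here:

```python
import random

def pso(f, dim, n=15, iters=60, lo=-5.0, hi=5.0, seed=7):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best (generic PSO)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# toy objective standing in for a BER-style cost: the sphere function
best, val = pso(lambda x: sum(t * t for t in x), dim=3)
print(val)  # close to 0
```

In the multi-sensor detection setting, f would score a joint symbol hypothesis against the received MC-CDMA signal instead of a continuous vector.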
Niki, Yuichiro; Ogawa, Mikako; Makiura, Rie; Magata, Yasuhiro; Kojima, Chie
2015-11-01
The detection of the sentinel lymph node (SLN), the first lymph node draining tumor cells, is important in cancer diagnosis and therapy. Dendrimers are synthetic macromolecules with highly controllable structures, and are potent multifunctional imaging agents. In this study, 12 types of dendrimer of different generations (G2, G4, G6, and G8) and different terminal groups (amino, carboxyl, and acetyl) were prepared to determine the optimal dendrimer structure for SLN imaging. Radiolabeled dendrimers were intradermally administrated to the right footpads of rats. All G2 dendrimers were predominantly accumulated in the kidney. Amino-terminal, acetyl-terminal, and carboxyl-terminal dendrimers of greater than G4 were mostly located at the injection site, in the blood, and in the SLN, respectively. The carboxyl-terminal dendrimers were largely unrecognized by macrophages and T-cells in the SLN. Finally, SLN detection was successfully performed by single photon emission computed tomography imaging using carboxyl-terminal dendrimers of greater than G4. The early detection of tumor cells in the sentinel draining lymph nodes (SLN) is of utmost importance in terms of determining cancer prognosis and devising treatment. In this article, the authors investigated various formulations of dendrimers to determine the optimal one for tumor detection. The data generated from this study would help clinicians to fight the cancer battle in the near future. Copyright © 2015 Elsevier Inc. All rights reserved.
Loss surface of XOR artificial neural networks
NASA Astrophysics Data System (ADS)
Mehta, Dhagash; Zhao, Xiaojun; Bernal, Edgar A.; Wales, David J.
2018-05-01
Training an artificial neural network involves an optimization process over the landscape defined by the cost (loss) as a function of the network parameters. We explore these landscapes using optimization tools developed for potential energy landscapes in molecular science. The number of local minima and transition states (saddle points of index one), as well as the ratio of transition states to minima, grow rapidly with the number of nodes in the network. There is also a strong dependence on the regularization parameter, with the landscape becoming more convex (fewer minima) as the regularization term increases. We demonstrate that in our formulation, stationary points for networks with Nh hidden nodes, including the minimal network required to fit the XOR data, are also stationary points for networks with Nh+1 hidden nodes when all the weights involving the additional node are zero. Hence, smaller networks trained on XOR data are embedded in the landscapes of larger networks. Our results clarify certain aspects of the classification and sensitivity (to perturbations in the input data) of minima and saddle points for this system, and may provide insight into dropout and network compression.
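The embedding property can be checked directly at the level of the network function: appending a hidden node whose weights are all zero leaves the output unchanged (stationarity of the larger network then follows from the gradients). A minimal sigmoid-network check, with arbitrary illustrative weights:

```python
import math

def mlp(x, W1, b1, W2, b2):
    """Single-hidden-layer net with sigmoid units and one sigmoid output."""
    h = [1 / (1 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    return 1 / (1 + math.exp(-(sum(w * hi for w, hi in zip(W2, h)) + b2)))

# a 2-hidden-node net (weights chosen arbitrarily for illustration)
W1, b1 = [[4.0, 4.0], [-4.0, -4.0]], [-2.0, 6.0]
W2, b2 = [5.0, 5.0], -7.0

# embed it in a 3-hidden-node net: the extra node has all-zero weights
W1e, b1e = W1 + [[0.0, 0.0]], b1 + [0.0]
W2e, b2e = W2 + [0.0], b2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    small = mlp(x, W1, b1, W2, b2)
    large = mlp(x, W1e, b1e, W2e, b2e)
    assert abs(small - large) < 1e-12   # identical on every XOR input
print("outputs identical")
```

The extra unit outputs a constant 0.5, but its zero outgoing weight removes any contribution, so the smaller network's landscape is embedded in the larger one's.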
Yang, Jing; Xu, Mai; Zhao, Wei; Xu, Baoguo
2010-01-01
For monitoring burst events in a class of reactive wireless sensor networks (WSNs), a multipath routing protocol (MRP) based on dynamic clustering and ant colony optimization (ACO) is proposed. Such an approach can maximize the network lifetime and reduce energy consumption. An important attribute of WSNs is their limited power supply, and therefore metrics such as the energy consumption of communication among nodes, residual energy, and path length were considered as very important criteria when designing routing in the MRP. Firstly, a cluster head (CH) is selected among nodes located in the event area according to parameters such as residual energy. Secondly, an improved ACO algorithm is applied in the search for multiple paths between the CH and the sink node. Finally, the CH dynamically chooses a route to transmit data with a probability that depends on path metrics such as energy consumption. The simulation results show that MRP can prolong the network lifetime, balance the energy consumption among nodes, and reduce the average energy consumption effectively.
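The probabilistic route choice typical of ACO protocols can be sketched as follows. The pheromone and residual-energy values are made up, and the exact heuristic weighting in MRP may differ:

```python
def next_hop_probs(pheromone, energy, alpha=1.0, beta=2.0):
    """ACO-style choice rule: the probability of selecting a candidate
    path scales with pheromone**alpha times a heuristic value (here
    residual energy)**beta, normalized over all candidates."""
    scores = {p: (pheromone[p] ** alpha) * (energy[p] ** beta)
              for p in pheromone}
    total = sum(scores.values())
    return {p: s / total for p, s in scores.items()}

phero = {'path1': 0.6, 'path2': 0.3, 'path3': 0.1}   # deposited pheromone
resid = {'path1': 0.5, 'path2': 0.9, 'path3': 0.8}   # residual energy
probs = next_hop_probs(phero, resid)
print(probs)
```

Weighting residual energy heavily (beta > alpha) is what spreads traffic away from depleted nodes and balances energy consumption.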
A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks
NASA Astrophysics Data System (ADS)
Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.
2008-08-01
Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system and presented a distributed mobility control algorithm where nodes react to local forces driving the network to energy minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show how in the presence of atmospheric obscuration stronger forces are exerted on network nodes that make them move closer to each other, avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.
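A toy version of the clear-air spring model illustrates the force-driven mobility idea. Only the quadratic link potential appears here; the attenuation and transmit-power terms this paper adds are omitted, and nothing prevents the trivial collapse that coverage constraints would forbid in the full formulation:

```python
def descend(pos, links, k=1.0, step=0.05, iters=200):
    """Move mobile nodes along the negative gradient of a spring
    potential E = sum over links of k * d**2 (clear-air model)."""
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in pos}
        for a, b in links:
            for d in (0, 1):
                f = -2 * k * (pos[a][d] - pos[b][d])   # -dE/d(pos_a)
                force[a][d] += f
                force[b][d] -= f                       # equal and opposite
        for n in pos:
            if n != 'base':                # keep one node anchored
                pos[n][0] += step * force[n][0]
                pos[n][1] += step * force[n][1]
    return pos

pos = {'base': [0.0, 0.0], 'r1': [4.0, 0.0], 'r2': [0.0, 6.0]}
links = [('base', 'r1'), ('r1', 'r2'), ('r2', 'base')]
pos = descend(pos, links)
print(pos['r1'], pos['r2'])  # relays pulled toward the anchored base node
```

Adding atmospheric attenuation makes long links exponentially more costly, which is why obscuration produces the stronger contracting forces the abstract reports.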
Multi-Channel Multi-Radio Using 802.11 Based Media Access for Sink Nodes in Wireless Sensor Networks
Campbell, Carlene E.-A.; Khan, Shafiullah; Singh, Dhananjay; Loo, Kok-Keong
2011-01-01
The next generation of surveillance and multimedia systems will be increasingly deployed as wireless sensor networks in order to monitor parks, public places and business premises. The convergence of data and telecommunication over IP-based networks has paved the way for wireless networks. Functions are becoming more intertwined by the compelling force of innovation and technology. For example, many closed-circuit TV premises surveillance systems now rely on transmitting their images and data over IP networks instead of standalone video circuits. These systems will increasingly rely on wireless networks and on IEEE 802.11 networks in the future. However, due to limited non-overlapping channels, delay, and congestion, there will be problems at sink nodes. In this paper we provide necessary conditions to verify the feasibility of a round-robin technique in these networks at the sink nodes by using a technique to regulate multi-radio, multichannel assignment. We demonstrate through simulations that a dynamic channel assignment scheme using a multi-radio, multichannel configuration at a single sink node can perform close to optimal on average, while multiple sink node assignment also performs well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and for optimizing different performance objectives. PMID:22163883
Optimal cube-connected cube multiprocessors
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Wu, Jie
1993-01-01
Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. However, in general the partitioned subproblems are very large. They demand high performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node represented as a cube. It requires fewer physical links between nodes than the hypercube, and provides the same communication support as the hypercube does on many applications. The reduced physical links can be used to enhance the bandwidth of the remaining links and, therefore, enhance the overall performance. The concept and the method to obtain optimal CCCubes, which are the CCCubes with a minimum number of links under a given total number of nodes, are proposed. The superiority of optimal CCCubes over standard hypercubes was also shown in terms of the link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer type parallel algorithms was identified. It was shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.
Liu, Limei; Sanchez-Lopez, Hector; Poole, Michael; Liu, Feng; Crozier, Stuart
2012-09-01
Splitting a magnetic resonance imaging (MRI) magnet into two halves can provide a central region to accommodate other modalities, such as positron emission tomography (PET). This approach, however, produces challenges in the design of the gradient coils in terms of gradient performance and fabrication. In this paper, the impact of a central gap in a split MRI system was theoretically studied by analysing the performance of split, actively-shielded transverse gradient coils. In addition, the effects of the eddy currents induced in the cryostat on power loss, mechanical vibration and magnetic field harmonics were also investigated. It was found, as expected, that the gradient performance tended to decrease as the central gap increased. Furthermore, the effects of the eddy currents were heightened as a consequence of splitting the gradient assembly into two halves. An optimal central gap size was found, such that the split gradient coils designed with this central gap size could produce an engineering solution with an acceptable trade-off between gradient performance and eddy current effects. These investigations provide useful information on the inherent trade-offs in hybrid MRI imaging systems. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap
2018-04-01
In machine vision, typical heuristic methods to extract parameterized objects out of raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. Towards this end, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
Radical lymph node dissection and assessment: Impact on gallbladder cancer prognosis
Liu, Gui-Jie; Li, Xue-Hua; Chen, Yan-Xin; Sun, Hui-Dong; Zhao, Gui-Mei; Hu, San-Yuan
2013-01-01
AIM: To investigate the lymph node metastasis patterns of gallbladder cancer (GBC) and evaluate the optimal categorization of nodal status as a critical prognostic factor. METHODS: From May 1995 to December 2010, a total of 78 consecutive patients with GBC underwent a radical resection at Liaocheng People’s Hospital. A radical resection was defined as removing both the primary tumor and the regional lymph nodes of the gallbladder. Demographic, operative and pathologic data were recorded. The lymph nodes retrieved were examined histologically for metastases routinely from each node. The positive lymph node count (PLNC) as well as the total lymph node count (TLNC) was recorded for each patient. Then the metastatic to examined lymph nodes ratio (LNR) was calculated. Disease-specific survival (DSS) and predictors of outcome were analyzed. RESULTS: With a median follow-up time of 26.50 mo (range, 2-132 mo), median DSS was 29.00 ± 3.92 mo (5-year survival rate, 20.51%). Nodal disease was found in 37 patients (47.44%). DSS of node-negative patients was significantly better than that of node-positive patients (median DSS, 40 mo vs 17 mo, χ2 = 14.814, P < 0.001), while there was no significant difference between N1 patients and N2 patients (median DSS, 18 mo vs 13 mo, χ2 = 0.741, P = 0.389). Optimal TLNC was determined to be four. When node-negative patients were divided according to TLNC, there was no difference in DSS between TLNC < 4 subgroup and TLNC ≥ 4 subgroup (median DSS, 37 mo vs 54 mo, χ2 = 0.715, P = 0.398). For node-positive patients, DSS of TLNC < 4 subgroup was worse than that of TLNC ≥ 4 subgroup (median DSS, 13 mo vs 21 mo, χ2 = 11.035, P < 0.001). Moreover, for node-positive patients, a new cut-off value of six nodes was identified for the number of TLNC that clearly stratified them into 2 separate survival groups (< 6 or ≥ 6, respectively; median DSS, 15 mo vs 33 mo, χ2 = 11.820, P < 0.001). 
DSS progressively worsened with increasing PLNC and LNR, but no definite cut-off value could be identified. Multivariate analysis revealed histological grade, tumor node metastasis staging, TLNC and LNR to be independent predictors of DSS. Neither the location of positive lymph nodes nor the PLNC was identified as an independent variable by multivariate analysis. CONCLUSION: Both TLNC and LNR are strong predictors of outcome after curative resection for GBC. The retrieval and examination of at least 6 nodes can influence staging quality and DSS, especially in node-positive patients. PMID:23964151
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, the simulation time decreased by factors of 41 and 32 compared to the single-worker-node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
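The split/map/reduce pattern described above can be mimicked in a few lines. The "simulation" below is a toy stand-in (pseudo-dose samples, not GATE physics), but the structural point carries over: the split must conserve the photon budget, each sub-macro must be self-contained, and the reduce step must be order-independent:

```python
import random

def split_macro(total_photons, n_workers):
    """Split a photon budget into self-contained sub-jobs, mirroring
    how a GATE macro is split into sub-macros for the Map tasks."""
    base, extra = divmod(total_photons, n_workers)
    return [base + (1 if i < extra else 0) for i in range(n_workers)]

def map_simulate(n_photons, seed):
    """Stand-in for one Map task: independent per-chunk pseudo-dose
    samples (each worker gets its own random stream)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n_photons)]

def reduce_dose(partials):
    """Reduce task: aggregate all sub-results into the final tally."""
    return sum(sum(chunk) for chunk in partials)

chunks = split_macro(10_000, 64)
assert sum(chunks) == 10_000          # no photons lost in the split
partials = [map_simulate(n, seed=i) for i, n in enumerate(chunks)]
print(round(reduce_dose(partials)))   # total dose from all workers
```

Because the reduce is a plain sum, re-running a failed chunk on another worker changes nothing in the final output, which is the fault-tolerance property the evaluation verified.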
NASA Astrophysics Data System (ADS)
Purnomo, A.; Widyawan; Najib, W.; Hartono, R.; Hartatik
2018-03-01
A mobile ad hoc network (MANET) consists of independent nodes. A node can communicate with other nodes without the presence of network infrastructure, and can act as a transmitter and receiver as well as a router. This research examines the effect of varying the active route timeout (ART) and my route timeout (MRT) parameters on the performance of the AODV-ETX protocol in a MANET. The AODV-ETX protocol is the AODV protocol that uses the ETX metric. Performance testing is done on a static topology with a 5 x 5 node grid model, where the distance between adjacent nodes is 100 m, and on a topology of 25 nodes moving randomly at a speed of 1.38 m/s in an area of 1500 m x 300 m. From the test results, on the static topology the AODV-ETX protocol shows optimal performance at MRT and ART values of 10 s and 15 s, respectively, and stable performance at MRT and ART values of ≥60 s, while on the randomly moving topology it shows stable performance at MRT and ART values of ≥80 s.
On advanced configuration enhance adaptive system optimization
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei
2017-10-01
The aim is to find an effective method to structure and enhance adaptive systems with complex functions, and to establish a universally applicable solution for prototyping and optimization. As the most critical component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, such as polarization dependence and a narrow working waveband. An advanced configuration based on a polarized beam splitter with an optimized energy-splitting method is used to overcome these problems effectively. With the global algorithm, the bandwidth is amplified by more than five times compared with that of traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration; results show their effectiveness.
Xenopus in Space and Time: Fossils, Node Calibrations, Tip-Dating, and Paleobiogeography.
Cannatella, David
2015-01-01
Published data from DNA sequences, morphology of 11 extant and 15 extinct frog taxa, and stratigraphic ranges of fossils were integrated to open a window into the deep-time evolution of Xenopus. The ages and morphological characters of fossils were used as independent datasets to calibrate a chronogram. We found that DNA sequences, either alone or in combination with morphological data and fossils, tended to support a close relationship between Xenopus and Hymenochirus, although in some analyses this topology was not significantly better than the Pipa + Hymenochirus topology. Analyses that excluded DNA data found strong support for the Pipa + Hymenochirus tree. The criterion for selecting the maximum age of the calibration prior influenced the age estimates, and our age estimates of early divergences in the tree of frogs are substantially younger than those of published studies. Node-dating and tip-dating calibrations, either alone or in combination, yielded older dates for nodes than did a root calibration alone. Our estimates of divergence times indicate that overwater dispersal, rather than vicariance due to the splitting of Africa and South America, may explain the presence of Xenopus in Africa and its closest fossil relatives in South America.
LinkMind: link optimization in swarming mobile sensor networks.
Ngo, Trung Dung
2011-01-01
A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization of swarming mobile sensor networks. The new method is based on combination of the artificial potential force guaranteeing connectivities of the mobile sensor nodes and the max-flow min-cut theorem of graph theory ensuring optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation. PMID:22164070
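The max-flow min-cut computation at the core of the link optimization can be illustrated with a standard Edmonds-Karp implementation. The sketch below is not the paper's code: the capacity matrix is a hypothetical four-node relay topology, chosen only to show how the min cut bounds the source-to-target throughput of the network.

```python
from collections import deque

# Edmonds-Karp max flow: BFS finds shortest augmenting paths in the
# residual graph until none remain; the result equals the min-cut capacity.
def max_flow(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                      # no augmenting path left
        v, bottleneck = t, float("inf")       # bottleneck residual capacity
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                         # augment along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Hypothetical 4-node relay chain: source 0, relays 1-2, target 3;
# entries are link capacities between mobile nodes.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # min cut limits throughput to 4
```

In the swarming setting, the artificial potential force keeps this capacity graph connected while node positions, and hence the capacities, evolve.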
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series among all samples so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting
2016-01-01
Wireless sensor networks deployed in coal mines could help companies provide workers in coal mines with better working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions can be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, whose energy can be replenished through the newly developed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes' placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper. PMID:26828500
Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun
2017-01-01
This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node through beamforming. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to near-optimally minimize the power ratio of the peak side lobe (PSL) to the main lobe (ML) of the UCA beamforming pattern aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority. PMID:28825648
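The beamforming quantity being optimized can be made concrete with a small sketch. This is not the EBO/EBO-R code: the radius-to-wavelength ratio and conjugate-phase weights below are assumptions, used only to show how a 12-element UCA's array factor peaks in a steered target direction, with side lobes elsewhere.

```python
import numpy as np

# Array factor of a 12-element uniform circular array (UCA), steered toward
# a target azimuth by conjugate phase weighting (illustrative parameters).
N = 12                       # number of antennas
r_over_lambda = 1.0          # UCA radius in wavelengths (assumed)
phi_n = 2 * np.pi * np.arange(N) / N          # element angular positions
target = np.deg2rad(0.0)                      # target azimuth

def array_factor(phi, weights):
    """|AF| of the UCA in azimuth direction phi (elevation fixed at horizon)."""
    phase = 2 * np.pi * r_over_lambda * np.cos(phi - phi_n)
    return np.abs(np.sum(weights * np.exp(1j * phase)))

# Conjugate (phase-compensating) weights steer the main lobe to `target`;
# normalizing by N makes the main-lobe magnitude exactly 1.
w = np.exp(-1j * 2 * np.pi * r_over_lambda * np.cos(target - phi_n)) / N

phis = np.deg2rad(np.linspace(-180, 180, 3601))
af = np.array([array_factor(p, w) for p in phis])
main_lobe = af.max()         # attained at the target direction
print(round(main_lobe, 3))
```

The EAs in the paper search over weight settings to push the largest off-target peak of `af` (the PSL) as far below `main_lobe` as possible.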
Phylogeny of sipunculan worms: A combined analysis of four gene regions and morphology.
Schulze, Anja; Cutler, Edward B; Giribet, Gonzalo
2007-01-01
The intra-phyletic relationships of sipunculan worms were analyzed based on DNA sequence data from four gene regions and 58 morphological characters. Initially, we analyzed the data under direct optimization using parsimony as the optimality criterion. An implied alignment resulting from the direct optimization analysis was subsequently utilized to perform a Bayesian analysis with mixed models for the different data partitions. For this we applied a doublet model for the stem regions of the 18S rRNA. Both analyses support monophyly of Sipuncula and most of the same clades within the phylum. The analyses differ with respect to the relationships among the major groups, but whereas the deep nodes in the direct optimization analysis generally show low jackknife support, they are supported by 100% posterior probability in the Bayesian analysis. Direct optimization has been useful for handling sequences of unequal length and generating conservative phylogenetic hypotheses, whereas the Bayesian analysis under mixed models provided high resolution in the basal nodes of the tree.
A Game Theoretic Approach for Balancing Energy Consumption in Clustered Wireless Sensor Networks
Lu, Yinzhi; Xiong, Lian; Tao, Yang; Zhong, Yuanchang
2017-01-01
Clustering is an effective topology control method in wireless sensor networks (WSNs), since it can enhance the network lifetime and scalability. To prolong the network lifetime in clustered WSNs, an efficient cluster head (CH) optimization policy is essential to distribute the energy among sensor nodes. Recently, game theory has been introduced to model clustering. Each sensor node is considered a rational and selfish player which plays a clustering game with an equilibrium strategy. It then decides whether to act as the CH according to this strategy, trading off between providing the required services and conserving energy. However, how to obtain the equilibrium strategy while maximizing the payoff of sensor nodes has rarely been addressed to date. In this paper, we present a game theoretic approach for balancing energy consumption in clustered WSNs. With our novel payoff function, realistic sensor behaviors can be captured well. The energy heterogeneity of nodes is considered by incorporating a penalty mechanism in the payoff function, so the nodes with more energy will compete for CH more actively. We obtain the Nash equilibrium (NE) strategy of the clustering game through convex optimization. Specifically, each sensor node can achieve its own maximal payoff when it makes the decision according to this strategy. Extensive simulations show that the proposed game theoretic clustering achieves good energy balancing and consequently greatly enhances the network lifetime. PMID:29149075
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions on very fine grids.
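The Jacobi versus Gauss-Seidel speed difference described above can be seen on any small linear system. The sketch below is illustrative, not the authors' radiative-transfer code: it compares the two iterations on an assumed diagonally dominant system, where G-S's immediate use of updated components is the same mechanism behind the reported factor-of-2 gain.

```python
import numpy as np

# Jacobi updates all components from the previous iterate; Gauss-Seidel
# sweeps through components using freshly updated values immediately.
def jacobi(A, b, iters):
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        x = (b - (A @ x - D * x)) / D          # (A@x - D*x) is the off-diagonal part
    return x

def gauss_seidel(A, b, iters):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):                     # sweep uses updated x[j] for j < i
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Assumed test system (diagonally dominant, so both iterations converge).
A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
b = np.array([1.0, 2, 3])
exact = np.linalg.solve(A, b)
err_j = np.abs(jacobi(A, b, 10) - exact).max()
err_gs = np.abs(gauss_seidel(A, b, 10) - exact).max()
print(err_gs < err_j)   # Gauss-Seidel reaches a smaller error in the same iterations
```

For tridiagonal systems like this one the G-S spectral radius is the square of the Jacobi radius, which is the textbook analogue of the factor-of-2 speed-up in iteration count.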
Numerical simulation of single bubble dynamics under acoustic travelling waves.
Ma, Xiaojian; Huang, Biao; Li, Yikai; Chang, Qing; Qiu, Sicong; Su, Zheng; Fu, Xiaoying; Wang, Guoyu
2018-04-01
The objective of this paper is to apply the CLSVOF method to investigate single bubble dynamics in acoustic travelling waves. The Navier-Stokes equation considering the acoustic radiation force is proposed and validated to capture the bubble behaviors. The CLSVOF method, which captures continuous geometric properties and satisfies mass conservation, is applied in the present work. Firstly, a regime map, depending on the dimensionless acoustic pressure amplitude and acoustic wave number, is constructed to present the different bubble behaviors. Then, the time evolution of the bubble oscillation is investigated and analyzed. Finally, the effects of the direction of acoustic wave propagation and of the damping coefficient on the bubble behavior are also considered. The numerical results show that the bubble presents distinct oscillation types in acoustic travelling waves, namely volume oscillation, shape oscillation, and splitting oscillation. For the splitting oscillation, the formation of a jet, the splitting of the bubble, and the rebound of sub-bubbles may lead to a substantial increase in pressure fluctuations on the boundary. For the shape oscillation, the nodes and antinodes of the acoustic pressure wave contribute to the formation of the "cross shape" of the bubble. It should be noted that the direction of bubble translation and of the bubble jet is always towards the direction of wave propagation. In addition, the damping coefficient causes a bubble in shape oscillation to become asymmetric in shape and unequal in size, and delays the splitting process. Copyright © 2017 Elsevier B.V. All rights reserved.
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value, which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
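A greedy heuristic of the kind assessed above can be sketched as follows. This is a hypothetical illustration, not the article's implementation: the node values, negative edge coefficients (modelling pairwise wake losses), and cardinality limit are all assumed data.

```python
# Greedy heuristic for the 0-1 QKP: repeatedly add the location whose node
# coefficient plus edge coefficients to already-selected locations gives the
# largest marginal gain, up to a cardinality limit k.
def greedy_qkp(node, edge, k):
    """node[i]: value of location i; edge[i][j]: pairwise value; select <= k."""
    n = len(node)
    selected = []
    while len(selected) < k:
        best, best_gain = None, float("-inf")
        for i in range(n):
            if i in selected:
                continue
            gain = node[i] + sum(edge[i][j] for j in selected)
            if gain > best_gain:
                best, best_gain = i, gain
        if best_gain <= 0 and selected:   # stop if no candidate improves the objective
            break
        selected.append(best)
    return selected

# Assumed data: 3 candidate locations; negative edges model wake losses.
node = [5.0, 4.0, 3.0]
edge = [[0, -3.0, -0.5], [-3.0, 0, -0.2], [-0.5, -0.2, 0]]
print(sorted(greedy_qkp(node, edge, 2)))   # picks 0 and 2: 5 + 3 - 0.5 = 7.5
```

On this toy instance the greedy choice avoids the strongly interfering pair (0, 1); the upper bound from the article would then certify how far such a heuristic solution can be from optimal.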
Cross-layer cluster-based energy-efficient protocol for wireless sensor networks.
Mammu, Aboobeker Sidhik Koyamparambil; Hernandez-Jayo, Unai; Sainz, Nekane; de la Iglesia, Idoia
2015-04-09
Recent developments in electronics and wireless communications have enabled the improvement of low-power and low-cost wireless sensor networks (WSNs). One of the most important challenges in WSNs is to increase the network lifetime due to the limited energy capacity of the network nodes. Another major challenge in WSNs is the hot spots that emerge as locations under heavy traffic load. Nodes in such areas quickly drain energy resources, leading to disconnection in network services. In such an environment, the cross-layer cluster-based energy-efficient (CCBE) algorithm can prolong the network lifetime and energy efficiency. CCBE is based on clustering the nodes into hexagonal structures. A hexagonal cluster consists of cluster members (CMs) and a cluster head (CH). The CHs are selected from the CMs based on proximity to the optimal CH distance and the residual energy of the nodes. Additionally, the optimal CH distance that corresponds to optimal energy consumption is derived. To balance the energy consumption and the traffic load in the network, the CHs are rotated among all CMs. In WSNs, energy is mostly consumed during transmission and reception. Transmission collisions can further decrease the energy efficiency. These collisions can be avoided by using a contention-free protocol during the transmission period. Additionally, the CH allocates slots to the CMs based on their residual energy to increase sleep time. Furthermore, the energy consumption of the CH can be further reduced by data aggregation. In this paper, we propose a data aggregation level based on the residual energy of the CH and a cost-aware decision scheme for the fusion of data. Performance results show that the CCBE scheme performs better in terms of network lifetime, energy consumption and throughput compared to low-energy adaptive clustering hierarchy (LEACH) and hybrid energy-efficient distributed clustering (HEED).
Algorithms for Mathematical Programming with Emphasis on Bi-level Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldfarb, Donald; Iyengar, Garud
2014-05-22
The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.
NASA Astrophysics Data System (ADS)
Esparza, Javier
In many areas of computer science entities can “reproduce”, “replicate”, or “create new instances”. Paramount examples are threads in multithreaded programs, processes in operating systems, and computer viruses, but many others exist: procedure calls create new incarnations of the callees, web crawlers discover new pages to be explored (and so “create” new tasks), divide-and-conquer procedures split a problem into subproblems, and leaves of tree-based data structures become internal nodes with children. For lack of a better name, I use the generic term systems with process creation to refer to all these entities.
Multicasting for all-optical multifiber networks
NASA Astrophysics Data System (ADS)
Köksal, Fatih; Ersoy, Cem
2007-02-01
All-optical wavelength-routed WDM WANs can support the high bandwidth and long session duration requirements of application scenarios such as interactive distance learning or the on-line diagnosis of patients simultaneously in different hospitals. However, multiple fibers and the limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for the routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with respect to the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing the two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.
Coupling effect of nodes popularity and similarity on social network persistence.
Jin, Xiaogang; Jin, Cheng; Huang, Jiaxuan; Min, Yong
2017-02-21
Network robustness represents the ability of networks to withstand failures and perturbations. In social networks, the maintenance of individual activities, also called persistence, is significant for understanding robustness. Previous works usually consider persistence on pre-generated network structures, while in social networks the structure grows with the cascading inactivity of existing individuals. Here, we address this challenge through an analysis of nodes under a coevolution model, which characterizes individual activity changes under three network growth modes: following the descending order of nodes' popularity, following their similarity, or uniformly at random. We show that when nodes possess high spontaneous activities, a popularity-first growth mode obtains highly persistent networks; otherwise, with low spontaneous activities, a similarity-first mode does better. Moreover, a compound growth mode, with the consecutive joining of similar nodes in a short period and the mixing in of a few high-popularity nodes, obtains the highest persistence. Therefore, node similarity is essential for persistent social networks, while properly coupling popularity with similarity further optimizes the persistence. This demonstrates that the evolution of node activity depends not only on network topology but also on its connective typology.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of a continuous and multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. This analyzes the search results in real time and promotes the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
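The node-partition idea can be sketched in miniature. This is an illustrative toy, not the authors' NP-ACO implementation: the discretization, ant and iteration counts, evaporation rate, and the exact form of the Gaussian pheromone map are all assumptions, shown here for a single continuous variable.

```python
import random, math

# Minimal node-partition ACO sketch: a continuous variable is discretized
# into nodes, ants pick nodes with probability proportional to pheromone,
# and pheromone is reinforced via a Gaussian map of the misfit so that
# better solutions deposit more.
random.seed(1)

def aco_minimize(f, lo, hi, n_nodes=40, n_ants=20, n_iter=60, rho=0.1):
    nodes = [lo + (hi - lo) * k / (n_nodes - 1) for k in range(n_nodes)]
    tau = [1.0] * n_nodes                          # pheromone per node
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            k = random.choices(range(n_nodes), weights=tau)[0]
            x = nodes[k]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
            # Gaussian mapping: misfit relative to the best -> deposit in (0, 1]
            tau[k] += math.exp(-(fx - best_f) ** 2)
        tau = [(1 - rho) * t for t in tau]         # evaporation
    return best_x, best_f

# Assumed 1-D misfit function with minimum at x = 2.
x, fx = aco_minimize(lambda x: (x - 2.0) ** 2, lo=-5.0, hi=5.0)
print(round(x, 2))
```

The Gaussian deposit keeps near-best nodes strongly reinforced while effectively zeroing deposits from poor ones, which is the mechanism the abstract credits for faster, more precise convergence than uniform ant-cycle updates.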
Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2007-01-01
Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.
Directional Migration of Recirculating Lymphocytes through Lymph Nodes via Random Walks
Thomas, Niclas; Matejovicova, Lenka; Srikusalanukul, Wichat; Shawe-Taylor, John; Chain, Benny
2012-01-01
Naive T lymphocytes exhibit extensive antigen-independent recirculation between blood and lymph nodes, where they may encounter dendritic cells carrying cognate antigen. We examine how long different T cells may spend in an individual lymph node by examining data from long term cannulation of blood and efferent lymphatics of a single lymph node in the sheep. We determine empirically the distribution of transit times of migrating T cells by applying the Least Absolute Shrinkage and Selection Operator (LASSO), or regularised regression, to fit experimental data describing the proportion of labelled infused cells in blood and efferent lymphatics over time. The optimal inferred solution reveals a distribution with high variance and strong skew. The mode transit time is typically between 10 and 20 hours, but a significant number of cells spend more than 70 hours before exiting. We complement the empirical machine learning based approach by modelling lymphocyte passage through the lymph node. On the basis of previous two-photon analysis of lymphocyte movement, we optimised distributions which describe the transit times (first passage times) of discrete one-dimensional and continuous (Brownian) three-dimensional random walks with drift. The optimal fit is obtained when drift is small, i.e. the ratio of probabilities of migrating forward and backward within the node is close to one. These distributions are qualitatively similar to the inferred empirical distribution, with high variance and strong skew. In contrast, an optimised normal distribution of transit times (symmetrical around the mean) fitted the data poorly. The results demonstrate that the rapid recirculation of lymphocytes observed at a macro level is compatible with predominantly randomised movement within lymph nodes, and significant probabilities of long transit times.
We discuss how this pattern of migration may contribute to facilitating interactions between low frequency T cells and antigen presenting cells carrying cognate antigen. PMID:23028891
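The first-passage behavior described above can be reproduced with a minimal simulation. The parameters below (node length, forward probability, reflecting entry boundary) are assumptions for illustration, not values from the paper: they show how a 1-D random walk with weak drift yields the high-variance, right-skewed transit-time distribution the authors infer.

```python
import random, statistics

# Transit (first-passage) times of a discrete 1-D random walk with weak
# drift through a "node" of L steps; entry boundary is reflecting.
random.seed(0)

def first_passage(L=20, p_forward=0.55):
    """Steps until the walk, started at 0, first reaches L."""
    pos, t = 0, 0
    while pos < L:
        pos += 1 if random.random() < p_forward else -1
        pos = max(pos, 0)                  # reflecting entry boundary
        t += 1
    return t

times = [first_passage() for _ in range(5000)]
mean = statistics.mean(times)
median = statistics.median(times)
print(mean > median)   # right skew: mean exceeds median
```

With the forward/backward ratio close to one, most walkers exit quickly but a heavy right tail of long transits remains, mirroring the cells observed to stay beyond 70 hours.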
Wireless visual sensor network resource allocation using cross-layer optimization
NASA Astrophysics Data System (ADS)
Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.
2009-01-01
In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
Collective network for computer structures
Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M
2014-01-07
A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.
Collective network for computer structures
Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY
2011-08-16
A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.
Regularization iteration imaging algorithm for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao
2018-03-01
The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast iterative shrinkage-thresholding algorithm is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
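The accelerated shrinkage step (FISTA) used here to speed up convergence can be sketched on the plain sparse least-squares problem min 0.5||Ax − b||² + λ||x||₁; the paper's full cost function also carries a low-rank term, which this sketch omits.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, iters=500):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With A = I the solution is just the soft-thresholded data, which makes the shrinkage behavior easy to inspect.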
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits of 55 antineoplastic agents into a subtraining set (n = 22), a calibration set (n = 21), and a test set (n = 12) have been examined. Using the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient were obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both maximal correlation coefficients for the subtraining and calibration sets and a minimal difference between these correlation coefficients. Thus, the calibration set serves as a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
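A minimal sketch of the target function: Pearson correlations on the subtraining and calibration sets are rewarded and their difference is penalized. The form r_sub + r_cal − |r_sub − r_cal| is one common way to write such a balance and is an assumption here, not necessarily the paper's exact function.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_balance(r_sub, r_cal):
    """Reward both correlations, penalize their gap (illustrative form)."""
    return (r_sub + r_cal) - abs(r_sub - r_cal)
```

Maximizing this score drives both sets toward high, and mutually close, correlation coefficients, which is exactly the stated goal of the correlation balance.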
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free paths to solve the link-disjoint path problem in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek nodes with better link quality in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
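The PSO update at the heart of the scheme can be illustrated with a generic continuous PSO minimizing a toy objective; the paper applies PSO to train the CTRNN rather than to a closed-form function, so this is only a sketch of the underlying mechanics.

```python
import random

def pso(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Generic particle swarm optimization minimizing f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.72, 1.49, 1.49      # standard constriction-style coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:                  # update personal best
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:                 # update global best
                    gbest, gval = xs[i][:], v
    return gbest, gval
```

Each particle is pulled toward its own best position and the swarm's best, which is the "iteratively trying to improve a candidate solution" behavior the abstract describes.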
Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael
2015-04-08
The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
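A stripped-down sketch of task reordering: tasks are mapped to nodes by a permutation, the objective is total traffic weighted by interconnect distance, and a mutation-only evolutionary loop (crossover omitted for brevity, so a simplification of a full genetic algorithm) searches the permutation space. The cost model and instance are illustrative, not Titan's.

```python
import random

def placement_cost(perm, comm, dist):
    """perm[t] = node hosting task t; cost = sum of traffic * hop distance."""
    n = len(perm)
    return sum(comm[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def evolve_placement(comm, dist, pop=30, gens=200, seed=0):
    """Elitist, mutation-only evolutionary search over task-to-node permutations."""
    rng = random.Random(seed)
    n = len(comm)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: placement_cost(p, comm, dist))
        survivors = population[:pop // 2]       # keep the fitter half
        children = []
        for p in survivors:
            c = p[:]
            i, j = rng.randrange(n), rng.randrange(n)
            c[i], c[j] = c[j], c[i]             # swap mutation
            children.append(c)
        population = survivors + children
    return min(population, key=lambda p: placement_cost(p, comm, dist))
```

On a tiny instance the search recovers the brute-force optimum, i.e., it co-locates the heavily communicating task pair on adjacent nodes.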
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Mark H., E-mail: markp@u.washington.ed; Smith, Wade P.; Parvathaneni, Upendra
2011-03-15
Purpose: To determine under what conditions positron emission tomography (PET) imaging will be useful in decisions regarding the use of radiotherapy for the treatment of clinically occult lymph node metastases in head-and-neck cancer. Methods and Materials: A decision model of PET imaging and its downstream effects on radiotherapy outcomes was constructed using an influence diagram. This model included the sensitivity and specificity of PET, as well as the type and stage of the primary tumor. These parameters were varied to determine the optimal strategy for imaging and therapy for different clinical situations. Maximum expected utility was the metric by which different actions were ranked. Results: For primary tumors with a low probability of lymph node metastases, the sensitivity of PET should be maximized, and 50 Gy should be delivered if PET is positive and 0 Gy if negative. As the probability for lymph node metastases increases, PET imaging becomes unnecessary in some situations, and the optimal dose to the lymph nodes increases. The model needed to include the causes of certain health states to predict current clinical practice. Conclusion: The model demonstrated the ability to reproduce expected outcomes for a range of tumors and provided recommendations for different clinical situations. The differences between the optimal policies and current clinical practice are likely due to a disparity between stated clinical decision processes and actual decision making by clinicians.
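At its core, the influence-diagram computation reduces to Bayes' rule plus maximum expected utility. A sketch with hypothetical utility numbers (the action names and utilities below are illustrative, not the study's values):

```python
def posterior_positive(prior, sens, spec):
    """P(nodal metastasis | positive PET) via Bayes' rule."""
    true_pos = sens * prior
    false_pos = (1.0 - spec) * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

def best_action(p_met, utilities):
    """Pick the action with maximum expected utility.
    utilities[action] = {'met': u, 'no_met': u} -- illustrative values only."""
    def expected_utility(action):
        u = utilities[action]
        return p_met * u['met'] + (1.0 - p_met) * u['no_met']
    return max(utilities, key=expected_utility)
```

With prior 0.1, sensitivity 0.9 and specificity 0.8, a positive PET raises the metastasis probability to 1/3, which can be enough to tip the decision toward irradiating the nodes, while at a low prior observation wins.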
Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui
2017-01-01
A sophisticated node deployment method can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Many deployment-based lifetime optimization methods for WSNs have been proposed; however, previous studies often neglect the retransmission mechanism and assume a continuous power control strategy, although both retransmission and discrete power control are widely used in practice and have a large effect on network energy consumption. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered, uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and comparisons between our optimal results and those based on less accurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.
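The effect of retransmission on a lifetime model can be sketched as follows: with per-attempt success probability p and unbounded ARQ retries, a delivered packet costs 1/p transmission attempts on average, and a linear network's lifetime is bottlenecked by its shortest-lived node. All parameter values are illustrative, not the paper's model.

```python
def expected_transmissions(p_success):
    """Mean attempts per delivered packet with unbounded ARQ retries."""
    return 1.0 / p_success

def node_lifetime(battery_j, pkt_rate_hz, e_tx_j, p_success):
    """Seconds until battery depletion:
    battery / (packets per second * energy per attempt * attempts per packet)."""
    return battery_j / (pkt_rate_hz * e_tx_j * expected_transmissions(p_success))

def network_lifetime(nodes):
    """A linear WSN dies when its first node dies (bottleneck definition).
    nodes: iterable of (battery_j, pkt_rate_hz, e_tx_j, p_success) tuples."""
    return min(node_lifetime(*n) for n in nodes)
```

The sketch makes the paper's point visible: halving the link success probability doubles the expected energy per delivered packet, so ignoring retransmission overestimates lifetime.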
Power Allocation and Outage Probability Analysis for SDN-based Radio Access Networks
NASA Astrophysics Data System (ADS)
Zhao, Yongxu; Chen, Yueyun; Mai, Zhiyuan
2018-01-01
In this paper, the performance of an access network architecture based on SDN (Software Defined Network) is analyzed with respect to the power allocation issue. A power allocation scheme based on a PSO-PA (Particle Swarm Optimization-Power Allocation) algorithm is proposed, subject to a constant total power, with the objective of minimizing the system outage probability. The entire access network resource configuration is controlled by the SDN controller, which sends the optimized power distribution factor to the base station source node (SN) and the relay node (RN). Simulation results show that the proposed scheme reduces the system outage probability at low complexity.
A novel load balanced energy conservation approach in WSN using biogeography based optimization
NASA Astrophysics Data System (ADS)
Kaushik, Ajay; Indu, S.; Gupta, Daya
2017-09-01
Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks. Balancing the load of the cluster heads is an important factor in the long-term operation of WSNs. In this paper we propose a novel load balancing approach using biogeography based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods, showing better performance than previous works on energy conservation in WSNs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krueger, Jens; Micikevicius, Paulius; Williams, Samuel
Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation), highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
NASA Astrophysics Data System (ADS)
Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit
2017-03-01
Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging. It is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system. A clinical ultrasound imaging system is modified to integrate photoacoustic imaging with the ultrasound imaging. Hence, light delivery has been integrated with the ultrasound probe. The angle of light delivery is optimized in this work with respect to the depth of imaging. Optimization was performed based on Monte Carlo simulation for light transport in tissues. Based on the simulation results, the probe holders were fabricated using 3D printing. Similar results were obtained experimentally using phantoms. Phantoms were developed to mimic sentinel lymph node imaging scenario. Also, in vivo sentinel lymph node imaging was done using the same system with contrast agent methylene blue up to a depth of 1.5 cm. The results validate that one can use Monte Carlo simulation as a tool to optimize the probe holder design depending on the imaging needs. This eliminates a trial and error approach generally used for designing a probe holder.
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2012-01-01
Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
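On a discrete, possibly non-convex utility set such as the one here (discrete source and channel coding rates), the KSBS can be approximated by choosing the operating point that maximizes the minimum normalized utility gain over the disagreement point. A minimal sketch of that selection rule, not the paper's full cross-layer formulation:

```python
def ksbs_select(points, disagreement):
    """Approximate Kalai-Smorodinsky selection on a finite utility set.

    points: list of utility tuples (one entry per node);
    disagreement: utility each node gets if no agreement is reached.
    KSBS equalizes each node's gain relative to its ideal (best achievable)
    utility, which on a discrete set becomes a max-min of normalized gains.
    """
    n = len(disagreement)
    ideal = [max(p[i] for p in points) for i in range(n)]
    def min_ratio(p):
        return min((p[i] - disagreement[i]) / (ideal[i] - disagreement[i])
                   for i in range(n))
    return max(points, key=min_ratio)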
Immersion and dry scanner extensions for sub-10nm production nodes
NASA Astrophysics Data System (ADS)
Weichselbaum, Stefan; Bornebroek, Frank; de Kort, Toine; Droste, Richard; de Graaf, Roelof F.; van Ballegoij, Rob; Botter, Herman; McLaren, Matthew G.; de Boeij, Wim P.
2015-03-01
Progressing towards the 10nm and 7nm imaging nodes, pattern-placement and layer-to-layer overlay requirements keep scaling down, driving system improvements in immersion (ArFi) and dry (ArF/KrF) scanners. A series of module enhancements in the NXT platform have been introduced; among others, the scanner is equipped with exposure stages with better dynamics and thermal control. Grid accuracy improvements with respect to calibration, setup, stability, and layout dependency tighten MMO performance and enable mix-and-match scanner operation. The same platform improvements also benefit focus control. Improvements in detectability and reproducibility of low-contrast alignment marks enhance the alignment solution window for 10nm logic processes and beyond. The system's architecture allows dynamic use of high-order scanner optimization based on advanced actuators of the projection lens and scanning stages. This enables a holistic optimization approach for the scanner, the mask, and the patterning process. Scanner design modifications for productivity, especially stage speeds and optimized metrology schemes, provide lower per-layer costs for customers using immersion lithography as well as conventional dry technology. Imaging, overlay, focus, and productivity data are presented that demonstrate 10nm and 7nm node litho-capability for both (immersion and dry) platforms.
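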
Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue
2014-01-01
Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path can be compromised by disabled nodes. To construct a secure and reliable network, designing an adaptive route strategy that optimizes the energy consumption and network lifetime of aggregation is of great importance. In this paper, we address the reliable data aggregation route problem for WSNs. Firstly, to ensure nodes work properly, we propose a data aggregation route algorithm which improves the energy efficiency in the WSN. The construction process, achieved through discrete particle swarm optimization (DPSO), saves node energy costs. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with minimal energy and maximum lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO with a multi-objective fitness function, combined with a phenotype sharing function and a penalty function, to find available routes. Experimental results show that, compared with other tree routing algorithms, our algorithm can effectively reduce energy consumption and trade off energy consumption against network lifetime. PMID:25215944
A Recessive Pollination Control System for Wheat Based on Intein-Mediated Protein Splicing.
Gils, Mario
2017-01-01
A transgene-expression system for wheat that relies on the complementation of inactive precursor protein fragments through a split-intein system is described. The N- and C-terminal fragments of a barnase gene from Bacillus amyloliquifaciens were fused to intein sequences from Synechocystis sp. and transformed into wheat plants. Upon translation, both barnase fragments are assembled by an autocatalytic intein-mediated trans-splicing reaction, thus forming a cytotoxic enzyme. This chapter focuses on the use of introns and flexible polypeptide linkers to foster the expression of a split-barnase expression system in plants. The methods and protocols that were employed with the objective to test the effects of such genetic elements on transgene expression and to find the optimal design of expression vectors for use in wheat are provided. Split-inteins can be used to form an agriculturally important trait (male sterility) in wheat plants. The use of this principle for the production of hybrid wheat seed is described. The suggested toolbox will hopefully be a valuable contribution to future optimization strategies in this commercially important crop.
NASA Astrophysics Data System (ADS)
Jara, Daniel; de Dreuzy, Jean-Raynald; Cochepin, Benoit
2017-12-01
Reactive transport modeling contributes to understand geophysical and geochemical processes in subsurface environments. Operator splitting methods have been proposed as non-intrusive coupling techniques that optimize the use of existing chemistry and transport codes. In this spirit, we propose a coupler relying on external geochemical and transport codes with appropriate operator segmentation that enables possible developments of additional splitting methods. We provide an object-oriented implementation in TReacLab developed in the MATLAB environment in a free open source frame with an accessible repository. TReacLab contains classical coupling methods, template interfaces and calling functions for two classical transport and reactive software (PHREEQC and COMSOL). It is tested on four classical benchmarks with homogeneous and heterogeneous reactions at equilibrium or kinetically-controlled. We show that full decoupling to the implementation level has a cost in terms of accuracy compared to more integrated and optimized codes. Use of non-intrusive implementations like TReacLab are still justified for coupling independent transport and chemical software at a minimal development effort but should be systematically and carefully assessed.
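The sequential non-iterative operator-splitting idea can be sketched on a toy 1-D problem: each step advects concentrations with an explicit upwind scheme, then applies first-order decay cell by cell as the "chemistry" operator. This illustrates only the coupling pattern, not TReacLab's actual interfaces to PHREEQC or COMSOL.

```python
import math

def transport_step(u, courant):
    """Explicit upwind advection for one step; courant = v*dt/dx <= 1."""
    out = u[:]
    for i in range(1, len(u)):
        out[i] = u[i] - courant * (u[i] - u[i - 1])
    return out

def chemistry_step(u, k, dt):
    """Exact first-order decay, solved independently in each cell."""
    decay = math.exp(-k * dt)
    return [c * decay for c in u]

def sequential_split(u, steps, courant, k, dt):
    """Sequential non-iterative splitting: transport, then chemistry, per step."""
    for _ in range(steps):
        u = chemistry_step(transport_step(u, courant), k, dt)
    return u
```

Because the two operators are applied one after the other rather than solved jointly, this decoupling carries the splitting error the paper quantifies against more integrated codes.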
Sabbagh, Charles; Mauvais, François; Cosse, Cyril; Rebibo, Lionel; Joly, Jean-Paul; Dromer, Didier; Aubert, Christine; Carton, Sophie; Dron, Bernard; Dadamessi, Innocenti; Maes, Bernard; Perrier, Guillaume; Manaouil, David; Fontaine, Jean-François; Gozy, Michel; Panis, Xavier; Foncelle, Pierre Henri; de Fresnoy, Hugues; Leroux, Fabien; Vaneslander, Pierre; Ghighi, Caroline; Regimbeau, Jean-Marc
2014-01-01
Lymph node ratio (LNR) (positive lymph nodes/sampled lymph nodes) is predictive of survival in colon cancer. The aim of the present study was to validate the LNR as a prognostic factor and to determine the optimal LNR cutoff for distinguishing between “good prognosis” and “poor prognosis” colon cancer patients. From January 2003 to December 2007, patients with TNM stage III colon cancer who were operated on, had at least 3 years of follow-up and were not lost to follow-up were included in this retrospective study. The two primary endpoints were 3-year overall survival (OS) and disease-free survival (DFS) as a function of the LNR groups and the cutoff. One hundred seventy-eight patients were included. There was no correlation between the LNR group and 3-year OS (P = 0.06) and a significant correlation between the LNR group and 3-year DFS (P = 0.03). The optimal LNR cutoff of 10% was significantly correlated with 3-year OS (P = 0.02) and DFS (P = 0.02). The LNR was not an accurate prognostic factor when fewer than 12 lymph nodes were sampled. Clarification and simplification of the LNR classification are prerequisites for use of this system in randomized control trials. An LNR of 10% appears to be the optimal cutoff. PMID:25058763
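The ratio and the 10% cutoff translate directly into code; the guard against fewer than 12 sampled nodes reflects the study's finding that the LNR is unreliable below that count. Function names are illustrative.

```python
def lymph_node_ratio(positive, sampled):
    """LNR = positive lymph nodes / sampled lymph nodes."""
    if sampled < 12:
        raise ValueError("LNR is unreliable when fewer than 12 nodes are sampled")
    return positive / sampled

def prognosis_group(positive, sampled, cutoff=0.10):
    """Dichotomize at the 10% cutoff suggested by the study."""
    return "poor" if lymph_node_ratio(positive, sampled) > cutoff else "good"
```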
An intelligent allocation algorithm for parallel processing
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.
1988-01-01
The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimations of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication duration varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structures on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph. Hence, the focus of allocation has been on varying token-communication durations rather than varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon granularity and characteristics of the interprocessor communication network.
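The critical path of a program graph with token-communication delays is the longest-path computation on a DAG; a sketch using Kahn's topological ordering, with illustrative durations:

```python
def critical_path_length(durations, edges):
    """Longest path through a DAG.

    durations[n] = execution time of node n;
    edges = {(u, v): token-communication delay from u to v}.
    """
    preds = {n: [] for n in durations}
    succs = {n: [] for n in durations}
    indeg = {n: 0 for n in durations}
    for (u, v), d in edges.items():
        succs[u].append(v)
        preds[v].append((u, d))
        indeg[v] += 1
    queue = [n for n in durations if indeg[n] == 0]   # Kahn's algorithm
    finish = {}
    while queue:
        n = queue.pop()
        start = max((finish[u] + d for (u, d) in preds[n]), default=0.0)
        finish[n] = start + durations[n]
        for v in succs[n]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(finish.values())
```

Setting the edge delays to zero models the crossbar-like case of free communication; increasing them toward node execution times reproduces the paper's range of token-communication durations.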
Zizys, Darius; Gaidys, Rimvydas; Dauksevicius, Rolanas; Ostasevicius, Vytautas; Daniulaitis, Vytautas
2015-01-01
The piezoelectric transduction mechanism is a common vibration-to-electric energy harvesting approach. Piezoelectric energy harvesters are typically mounted on a vibrating host structure, whereby alternating voltage output is generated by a dynamic strain field. A design target in this case is to match the natural frequency of the harvester to the ambient excitation frequency for the device to operate in resonance mode, thus significantly increasing vibration amplitudes and, as a result, energy output. Other fundamental vibration modes have strain nodes, where the dynamic strain field changes sign in the direction of the cantilever length. The paper reports on a dimensionless numerical transient analysis of a cantilever of a constant cross-section and an optimally-shaped cantilever with the objective to accurately predict the position of a strain node. Total effective strain produced by both cantilevers segmented at the strain node is calculated via transient analysis and compared to the strain output produced by the cantilevers segmented at strain nodes obtained from modal analysis, demonstrating a 7% increase in energy output. Theoretical results were experimentally verified by using open-circuit voltage values measured for the cantilevers segmented at optimal and suboptimal segmentation lines. PMID:26703623
An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.
Cheng, Jing; Xia, Linyuan
2016-08-31
Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity, communication overhead in WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on the modification of step size, this approach enables the population to approach global optimal solution rapidly, and the fitness of each solution is employed to build mutation probability for avoiding local convergence. Further, the approach restricts the population in the certain range so that it can prevent the energy consumption caused by insignificant search. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase convergence rate but also reduce average localization error compared with standard CS algorithm and Particle Swarm Optimization (PSO) algorithm.
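For reference, the standard Cuckoo Search loop with Mantegna's Lévy-flight step is sketched below on a toy objective; the paper's modified step size, mutation probability and search-range restriction are not reproduced here.

```python
import math, random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=200, lo=-5.0, hi=5.0, pa=0.25, seed=3):
    rng = random.Random(seed)
    def new_nest():
        return [rng.uniform(lo, hi) for _ in range(dim)]
    nests = [new_nest() for _ in range(n_nests)]
    vals = [f(x) for x in nests]
    g = min(range(n_nests), key=lambda k: vals[k])
    best, best_val = nests[g][:], vals[g]
    for _ in range(iters):
        i = rng.randrange(n_nests)
        step = 0.01 * levy_step(rng)
        cand = [min(hi, max(lo, x + step * (x - b)))   # Levy flight toward best
                for x, b in zip(nests[i], best)]
        fc = f(cand)
        j = rng.randrange(n_nests)
        if fc < vals[j]:                               # replace a random nest
            nests[j], vals[j] = cand, fc
        if fc < best_val:
            best, best_val = cand[:], fc
        for k in range(n_nests):                       # abandon a fraction pa
            if rng.random() < pa:
                nests[k] = new_nest()
                vals[k] = f(nests[k])
                if vals[k] < best_val:
                    best, best_val = nests[k][:], vals[k]
    return best, best_val
```

In the localization setting, f would be the ranging-error residual of a candidate node position; here a sphere function stands in.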
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
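Error checking against parity-style equations over GF(2) can be sketched as a syndrome test plus a brute-force single-bit correction; the matrix below is illustrative, not the frozen-node equations of a real polar code.

```python
def syndrome(H, x):
    """s = H x over GF(2); a nonzero syndrome flags errors in the input nodes."""
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

def check_and_flip(H, x):
    """Toy single-error correction: try flipping each bit and keep the first
    flip that zeroes the syndrome (brute force, for illustration only)."""
    if not any(syndrome(H, x)):
        return x                      # all check equations already satisfied
    for i in range(len(x)):
        trial = x[:]
        trial[i] ^= 1
        if not any(syndrome(H, trial)):
            return trial
    return x                          # no single flip explains the syndrome
```

The paper goes further: it resolves the ambiguity among multiple solutions with a CRC-aided optimization and adjusts probability messages rather than hard-flipping bits, but the syndrome test above is the common starting point.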
Anatomical classification of breast sentinel lymph nodes using computed tomography-lymphography.
Fujita, Tamaki; Miura, Hiroyuki; Seino, Hiroko; Ono, Shuichi; Nishi, Takashi; Nishimura, Akimasa; Hakamada, Kenichi; Aoki, Masahiko
2018-05-03
To evaluate the anatomical classification and location of breast sentinel lymph nodes, preoperative computed tomography-lymphography examinations were retrospectively reviewed for sentinel lymph nodes in 464 cases clinically diagnosed with node-negative breast cancer between July 2007 and June 2016. Anatomical classification was performed based on the numbers of lymphatic routes and sentinel lymph nodes, the flow direction of lymphatic routes, and the location of sentinel lymph nodes. Of the 464 cases reviewed, anatomical classification could be performed in 434 (93.5 %). The largest number of cases showed single route/single sentinel lymph node (n = 296, 68.2 %), followed by multiple routes/multiple sentinel lymph nodes (n = 59, 13.6 %), single route/multiple sentinel lymph nodes (n = 53, 12.2 %), and multiple routes/single sentinel lymph node (n = 26, 6.0 %). Classification based on the flow direction of lymphatic routes showed that 429 cases (98.8 %) had outward flow on the superficial fascia toward axillary lymph nodes, whereas classification based on the height of sentinel lymph nodes showed that 323 cases (74.4 %) belonged to the upper pectoral group of axillary lymph nodes. There was wide variation in the number of lymphatic routes and their branching patterns and in the number, location, and direction of flow of sentinel lymph nodes. It is clinically very important to preoperatively understand the anatomical morphology of lymphatic routes and sentinel lymph nodes for optimal treatment of breast cancer, and computed tomography-lymphography is suitable for this purpose.
Energy optimization in mobile sensor networks
NASA Astrophysics Data System (ADS)
Yu, Shengwei
Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially when mobility (i.e., locomotion control), routing (i.e., communications) and sensing are unique characteristics of mobile robots for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from stations to stations in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. 
Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility cost. For the second problem, the formulation is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel procedure of modified sequential convex approximation that has fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which also justifies the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of the robotic nodes and the optimal network links, which are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications. We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
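The idea of moving a relay to maximize network lifetime can be illustrated with a toy projected-subgradient step, in the spirit of the sub-gradient method mentioned above. This is a deliberately simplified sketch, not the thesis's saddle-point algorithm: it assumes per-hop transmit energy grows with squared hop distance, so lifetime is limited by the longer of the two hops and the relay should minimize the maximum squared hop distance.

```python
def place_relay(src, dst, iters=2000):
    # Subgradient descent on f(r) = max(|r - src|^2, |r - dst|^2):
    # at each step, take the gradient of whichever hop cost is active,
    # with a diminishing step size 1/t.
    r = [0.0, 0.0]
    for t in range(1, iters + 1):
        to_src = (r[0] - src[0], r[1] - src[1])
        to_dst = (r[0] - dst[0], r[1] - dst[1])
        c_src = to_src[0] ** 2 + to_src[1] ** 2
        c_dst = to_dst[0] ** 2 + to_dst[1] ** 2
        g = to_src if c_src >= c_dst else to_dst  # subgradient of the active term
        step = 1.0 / t
        r = [r[0] - step * 2 * g[0], r[1] - step * 2 * g[1]]
    return r
```

For a source at (0, 0) and a destination at (10, 0), the iterates settle at the midpoint (5, 0), which equalizes the two hop energies.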
NASA Technical Reports Server (NTRS)
Seldner, K.
1977-01-01
An algorithm was developed to optimally control the traffic signals at each intersection using a discrete-time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
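The off-line cycle-split idea can be sketched with a minimal discrete-time queue model. This is an illustrative simplification under assumed numbers (arrival rates, saturation flow, cycle length), not the report's model: two approaches share one cycle, each discharges only during its green share, and the split with the smallest accumulated queue is selected off-line.

```python
def total_delay(split, arrivals=(0.3, 0.5), sat_flow=1.0, cycle=60, horizon=600):
    # Discrete-time model: each step, vehicles join the queue on both
    # approaches; an approach discharges at the saturation flow only during
    # its green share of the cycle.  Accumulated queue length is the delay.
    queues = [0.0, 0.0]
    delay = 0.0
    for t in range(horizon):
        phase = (t % cycle) / cycle
        green = (phase < split, phase >= split)  # two-phase signal
        for i, lam in enumerate(arrivals):
            queues[i] += lam
            if green[i]:
                queues[i] = max(0.0, queues[i] - sat_flow)
            delay += queues[i]
    return delay

def best_split(arrivals=(0.3, 0.5)):
    # Off-line search over candidate cycle splits for minimum total delay.
    return min((k / 20 for k in range(1, 20)),
               key=lambda s: total_delay(s, arrivals))
```

Starving either approach (a split near 0 or 1) lets its queue grow without bound, so the selected split always beats the extremes.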
Mechanistic Understanding of the Plasmonic Enhancement for Solar Water Splitting.
Zhang, Peng; Wang, Tuo; Gong, Jinlong
2015-09-23
H2 generation by solar water splitting is one of the most promising solutions to meet the increasing energy demands of the fast developing society. However, the efficiency of solar-water-splitting systems is still too low for practical applications, which requires further enhancement via different strategies such as doping, construction of heterojunctions, morphology control, and optimization of the crystal structure. Recently, integration of plasmonic metals to semiconductor photocatalysts has been proved to be an effective way to improve their photocatalytic activities. Thus, in-depth understanding of the enhancement mechanisms is of great importance for better utilization of the plasmonic effect. This review describes the relevant mechanisms from three aspects, including: i) light absorption and scattering; ii) hot-electron injection and iii) plasmon-induced resonance energy transfer (PIRET). Perspectives are also proposed to trigger further innovative thinking on plasmonic-enhanced solar water splitting. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Xia, Yangkun; Fu, Zhuo; Tsai, Sang-Bing; Wang, Jiangtao
2018-05-10
To promote the development of low-carbon logistics and reduce logistics distribution costs, the vehicle routing problem with split deliveries by backpack is studied. Building on the model of the classical capacitated vehicle routing problem, a form of discrete split delivery was designed in which customer demand can be split only by backpack. A double-objective mathematical model and a corresponding adaptive tabu search (TS) algorithm were constructed to solve this problem. By embedding an adaptive penalty mechanism and adopting a random neighborhood selection strategy and a reinitialization principle, the global optimization ability of the new algorithm was enhanced. Comparisons with results in the literature show the effectiveness of the proposed algorithm. The proposed method can reduce the costs of low-carbon logistics and cut carbon emissions, which is conducive to the sustainable development of low-carbon logistics.
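The adaptive penalty mechanism mentioned in this abstract typically means evaluating candidate route sets by travel cost plus a weighted capacity violation, so the tabu search can pass through infeasible neighbours. A minimal sketch of such an objective, with a hypothetical depot at index 0 and an externally managed penalty weight:

```python
def penalized_cost(routes, demand, capacity, dist, penalty):
    # Adaptive-penalty objective: routing cost plus a weighted sum of
    # per-route capacity violations.  The tabu search would adjust
    # `penalty` upward/downward as the incumbent drifts infeasible/feasible.
    cost = 0.0
    violation = 0.0
    for route in routes:
        load = sum(demand[c] for c in route)
        violation += max(0.0, load - capacity)
        stops = [0] + route + [0]  # each route starts and ends at the depot (index 0)
        cost += sum(dist[a][b] for a, b in zip(stops, stops[1:]))
    return cost + penalty * violation
```

Merging two customers onto one overloaded vehicle may shorten the tour while incurring a penalty, which is exactly the trade-off the adaptive weight steers.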
Opitz, Alexander K; Nenning, Andreas; Rameshan, Christoph; Rameshan, Raffael; Blume, Raoul; Hävecker, Michael; Knop-Gericke, Axel; Rupprechter, Günther; Fleig, Jürgen; Klötzer, Bernhard
2015-01-01
In the search for optimized cathode materials for high-temperature electrolysis, mixed conducting oxides are highly promising candidates. This study provides fundamentally novel insights into the relation between surface chemistry and electrocatalytic activity of lanthanum ferrite based electrolysis cathodes. To this end, near-ambient-pressure X-ray photoelectron spectroscopy (NAP-XPS) and impedance spectroscopy experiments were performed simultaneously on electrochemically polarized La0.6Sr0.4FeO3−δ (LSF) thin film electrodes. Under cathodic polarization the formation of Fe0 on the LSF surface could be observed, which was accompanied by a strong improvement of the electrochemical water splitting activity of the electrodes. This correlation suggests a fundamentally different water splitting mechanism in the presence of the metallic iron species and may open novel paths in the search for electrodes with increased water splitting activity. PMID:25557533
OpenGeoSys-GEMS: Hybrid parallelization of a reactive transport code with MPI and threads
NASA Astrophysics Data System (ADS)
Kosakowski, G.; Kulik, D. A.; Shao, H.
2012-04-01
OpenGeoSys-GEMS is a general-purpose reactive transport code based on the operator splitting approach. The code couples the Finite-Element groundwater flow and multi-species transport modules of the OpenGeoSys (OGS) project (http://www.ufz.de/index.php?en=18345) with the GEM-Selektor research package to model thermodynamic equilibrium of aquatic (geo)chemical systems utilizing the Gibbs Energy Minimization approach (http://gems.web.psi.ch/). The combination of OGS and the GEM-Selektor kernel (GEMS3K) is highly flexible due to the object-oriented modular code structures and the well defined (memory based) data exchange modules. Like other reactive transport codes, the practical applicability of OGS-GEMS is often hampered by long calculation times and large memory requirements. • For realistic geochemical systems, which may include dozens of mineral phases and several (non-ideal) solid solutions, the time needed to solve the chemical system with GEMS3K may increase dramatically. • The codes are coupled in a sequential non-iterative loop. To maintain accuracy, the time step size is restricted. In combination with a fine spatial discretization, the time step size may become very small, which increases calculation times drastically even for small 1D problems. • The current version of OGS is not optimized for memory use, and the MPI version of OGS does not distribute data between nodes. Even for moderately small 2D problems the number of MPI processes that fit into the memory of up-to-date workstations or HPC hardware is limited. One strategy to overcome the above restrictions of OGS-GEMS is to parallelize the coupled code. For OGS a parallelized version already exists. It is based on a domain decomposition method implemented with MPI and provides a parallel solver for fluid and mass transport processes.
In the coupled code, after solving fluid flow and solute transport, geochemical calculations are done in form of a central loop over all finite element nodes with calls to GEMS3K and consecutive calculations of changed material parameters. In a first step the existing MPI implementation was utilized to parallelize this loop. Calculations were split between the MPI processes and afterwards data was synchronized by using MPI communication routines. Furthermore, multi-threaded calculation of the loop was implemented with help of the boost thread library (http://www.boost.org). This implementation provides a flexible environment to distribute calculations between several threads. For each MPI process at least one and up to several dozens of worker threads are spawned. These threads do not replicate the complete OGS-GEM data structure and use only a limited amount of memory. Calculation of the central geochemical loop is shared between all threads. Synchronization between the threads is done by barrier commands. The overall number of local threads times MPI processes should match the number of available computing nodes. The combination of multi-threading and MPI provides an effective and flexible environment to speed up OGS-GEMS calculations while limiting the required memory use. Test calculations on different hardware show that for certain types of applications tremendous speedups are possible.
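The structure of the thread-parallel chemistry loop can be sketched in a few lines: the per-node equilibrium calls are independent, so the central loop is simply shared between worker threads. This is a structural sketch only; `solve_node_chemistry` is a hypothetical stand-in for the GEMS3K call, not its real interface.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_node_chemistry(state):
    # Hypothetical placeholder for the per-node GEMS3K equilibrium solve.
    return 0.5 * state + 1.0

def chemistry_loop(node_states, n_workers=4):
    # Central geochemical loop: work on the finite-element nodes is shared
    # between worker threads, mirroring the boost-thread scheme above.
    # The pool's shutdown acts as the barrier synchronizing all workers.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(solve_node_chemistry, node_states))
```

Note that in CPython such threads only yield a real speedup when the per-node call releases the GIL, as a compiled kernel like GEMS3K would; the sketch shows the work-sharing structure, not the C++ implementation.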
Coupling effect of nodes popularity and similarity on social network persistence
Jin, Xiaogang; Jin, Cheng; Huang, Jiaxuan; Min, Yong
2017-01-01
Network robustness represents the ability of networks to withstand failures and perturbations. In social networks, maintenance of individual activities, also called persistence, is significant for understanding robustness. Previous works usually consider persistence on pre-generated network structures, whereas in social networks the structure grows alongside the cascading inactivity of existing individuals. Here, we address this challenge through an analysis of nodes under a coevolution model, which characterizes individual activity changes under three network growth modes: following the descending order of nodes’ popularity, similarity, or uniform random. We show that when nodes possess high spontaneous activities, a popularity-first growth mode obtains highly persistent networks; otherwise, with low spontaneous activities, a similarity-first mode does better. Moreover, a compound growth mode, with the consecutive joining of similar nodes in a short period and the mixing in of a few high-popularity nodes, obtains the highest persistence. Therefore, node similarity is essential for persistent social networks, while properly coupling popularity with similarity further optimizes the persistence. This demonstrates that the evolution of node activity depends not only on network topology but also on connective typology. PMID:28220840
Optimized scalable network switch
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2007-12-04
In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.
Using LTI Dynamics to Identify the Influential Nodes in a Network
Jorswieck, Eduard; Scheunert, Christian
2016-01-01
Networks are used for modeling numerous technical, social or biological systems. In order to better understand the system dynamics, it is of great interest to identify the most important nodes within the network. For a large set of problems, whether it is the optimal use of available resources, spreading information efficiently or even protection from malicious attacks, the most important node is the most influential spreader, the one that is capable of propagating information in the shortest time to a large portion of the network. Here we propose the Node Imposed Response (NiR), a measure which accurately evaluates node spreading power. It outperforms betweenness, degree, k-shell and h-index centrality in many cases and shows similar accuracy to dynamics-sensitive centrality. We utilize a system-theoretic approach, considering the network as a Linear Time-Invariant system. By observing the system response we can quantify the importance of each node. In addition, our study provides a robust tool set for various protective strategies. PMID:28030548
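The system-theoretic idea, quantifying a node by the network's response to an input placed at that node, can be sketched with a tiny discrete LTI simulation. This is an illustrative proxy under assumed parameters (the damping factor `alpha` and horizon are arbitrary), not the NiR measure itself.

```python
def node_response(adj, node, steps=20, alpha=0.2):
    # Treat the network as a discrete LTI system x(t+1) = alpha * A x(t),
    # inject a unit impulse at one node, and accumulate the response summed
    # over all nodes as a proxy for that node's spreading power.
    n = len(adj)
    x = [0.0] * n
    x[node] = 1.0
    total = sum(x)
    for _ in range(steps):
        x = [alpha * sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        total += sum(x)
    return total
```

On a star graph, an impulse at the hub excites every leaf in one step, so the hub scores higher than any leaf, matching the intuition that the hub is the more influential spreader.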
Efficient Mobility Management Signalling in Network Mobility Supported PMIPV6
Jebaseeli Samuelraj, Ananthi; Jayapal, Sundararajan
2015-01-01
Proxy Mobile IPV6 (PMIPV6) is a network-based mobility management protocol which supports a node's mobility without the contribution of the mobile node itself. PMIPV6 was initially designed to support individual node mobility, and it should be enhanced to support mobile network movement. NEMO-BSP is an existing protocol to support network mobility (NEMO) in a PMIPV6 network. Due to underlying differences in the basic protocols, NEMO-BSP cannot be directly applied to a PMIPV6 network. Mobility management signaling and the data structures used for an individual node's mobility should be modified to support group nodes' mobility management efficiently. Though a lot of research is in progress to implement mobile network movement in PMIPV6, it is not yet standardized and each approach suffers from different shortcomings. This research work proposes modifications in NEMO-BSP and PMIPV6 to achieve NEMO support in PMIPV6. It mainly concentrates on optimizing the number and size of the mobility signaling messages exchanged while a mobile network or mobile network node changes its access point. PMID:26366431
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure, the iterative improvement method, to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Jiang, Ailian; Zheng, Lihong
2018-03-29
Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime.
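The hybrid of a minimum-hop scheme with ACO-style desirability can be sketched compactly: hop counts restrict candidates to neighbours that move toward the sink, and a pheromone term weighted by residual energy picks among them. A minimal illustration with assumed structures (adjacency lists, a pheromone table keyed by edge, and a residual-energy map), not the paper's algorithm:

```python
from collections import deque

def hop_counts(adj, sink):
    # Breadth-first search gives each node's minimum hop count to the sink.
    hops = {sink: 0}
    frontier = deque([sink])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                frontier.append(v)
    return hops

def next_hop(u, adj, hops, pheromone, energy):
    # Hybrid rule: restrict candidates to neighbours that reduce the hop
    # count (minimum-hop scheme), then rank them ACO-style by pheromone
    # level weighted by residual energy to balance the load.
    forward = [v for v in adj[u] if hops[v] < hops[u]]
    return max(forward, key=lambda v: pheromone[(u, v)] * energy[v])
```

With two equally short routes, the relay with more residual energy is preferred, which is the load-balancing behaviour the abstract describes.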
Cho, Sunghyun; Choi, Ji-Woong; You, Cheolwoo
2013-10-02
Mobile wireless multimedia sensor networks (WMSNs), which consist of mobile sink or sensor nodes and use rich sensing information, require much faster and more reliable wireless links than static wireless sensor networks (WSNs). This paper proposes an adaptive multi-node (MN) multiple input and multiple output (MIMO) transmission to improve the transmission reliability and capacity of mobile sink nodes when they experience spatial correlation. Unlike conventional single-node (SN) MIMO transmission, the proposed scheme considers the use of transmission antennas from more than two sensor nodes. To find an optimal antenna set and a MIMO transmission scheme, a MN MIMO channel model is introduced first, followed by derivation of closed-form ergodic capacity expressions with different MIMO transmission schemes, such as space-time transmit diversity coding and spatial multiplexing. The capacity varies according to the antenna correlation and the path gain from multiple sensor nodes. Based on these statistical results, we propose an adaptive MIMO mode and antenna set switching algorithm that maximizes the ergodic capacity of mobile sink nodes. The ergodic capacity of the proposed scheme is compared with conventional SN MIMO schemes, where the gain increases as the antenna correlation and path gain ratio increase.
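The dependence of ergodic capacity on SNR and antenna correlation can be illustrated with a small Monte-Carlo estimate for a 2x2 spatial-multiplexing link. This is an illustrative model under stated assumptions (equal-power allocation, transmit-side correlation only, Rayleigh fading), not the paper's closed-form derivation.

```python
import math
import random

def ergodic_capacity(snr, rho=0.0, samples=2000, seed=7):
    # Monte-Carlo estimate of C = E[log2 det(I + (SNR/2) H H^H)] for a
    # 2x2 channel H = W R^(1/2), where W has i.i.d. unit-variance complex
    # Gaussian entries and R = [[1, rho], [rho, 1]] is transmit correlation.
    rng = random.Random(seed)
    s = 0.5 * (math.sqrt(1 + rho) + math.sqrt(1 - rho))  # R^(1/2) diagonal entry
    d = 0.5 * (math.sqrt(1 + rho) - math.sqrt(1 - rho))  # R^(1/2) off-diagonal entry
    cg = lambda: complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
    c = snr / 2.0
    total = 0.0
    for _ in range(samples):
        w = [[cg(), cg()], [cg(), cg()]]
        h = [[w[i][0] * s + w[i][1] * d, w[i][0] * d + w[i][1] * s] for i in range(2)]
        # Gram matrix M = H H^H, then det(I + c M) for the 2x2 Hermitian M.
        m00 = abs(h[0][0]) ** 2 + abs(h[0][1]) ** 2
        m11 = abs(h[1][0]) ** 2 + abs(h[1][1]) ** 2
        m01 = h[0][0] * h[1][0].conjugate() + h[0][1] * h[1][1].conjugate()
        det = (1 + c * m00) * (1 + c * m11) - (c ** 2) * abs(m01) ** 2
        total += math.log2(det)
    return total / samples
```

The estimate reproduces the qualitative trends the abstract exploits: capacity grows with SNR and shrinks as antenna correlation rises, which is what motivates switching modes or borrowing antennas from a second node.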
NASA Astrophysics Data System (ADS)
Huang, Y. G.; Wang, L. G.; Lu, Y. L.; Chen, J. R.; Zhang, J. H.
2015-09-01
Based on two-dimensional elasticity theory, this study established a mechanical model under chordally opposing distributed compressive loads, in order to strengthen the theoretical foundation of the flattened Brazilian splitting test used for measuring the indirect tensile strength of rocks. The stress superposition method was used to obtain approximate analytic solutions for the stress components inside the flattened Brazilian disk. These analytic solutions were then verified through a comparison with numerical results from the finite element method (FEM). Based on the theoretical derivation, this research carried out a comparative study of the effect of the flattened loading angle on the stress values and the degree of stress concentration inside the disk. The results showed that the stress concentration near the loading point and the ratio of compressive to tensile stress inside the disk decreased dramatically as the flattened loading angle increased, avoiding crushing failure near the loading points of Brazilian disk specimens. However, the tensile stress value and the tensile region were only slightly reduced as the flattened loading angle increased. Furthermore, this study found that the optimal flattened loading angle was 20°-30°; flattened loading angles that were too large or too small made it difficult to guarantee the central tensile splitting failure principle of the Brazilian splitting test. According to the Griffith strength failure criterion, a formula for calculating the indirect tensile strength of rocks was derived theoretically. The resulting theoretical indirect tensile strength closely coincided with existing experimental results. Finally, this paper simulated the fracture evolution process of rocks under different loading angles using the finite element software ANSYS.
The modeling results showed that the Flattened Brazilian Splitting Test using the optimal loading angle could guarantee the tensile splitting failure initiated by a central crack.
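For orientation, the classical (unflattened) Brazilian test evaluates the indirect tensile strength as sigma_t = 2P / (pi * D * t) from the failure load P, disk diameter D and thickness t; for the flattened disk, a dimensionless correction coefficient depending on the loading angle multiplies this value. The sketch below uses the classical relation with a hypothetical coefficient input `k`, not the coefficient derived in the study above.

```python
import math

def indirect_tensile_strength(load_n, diameter_m, thickness_m, k=1.0):
    # Classical Brazilian-test relation sigma_t = 2P / (pi * D * t), in Pa.
    # `k` is a hypothetical flattened-disk correction factor (k = 1 recovers
    # the standard full-disk formula); the study derives its own coefficient.
    return k * 2.0 * load_n / (math.pi * diameter_m * thickness_m)
```

For example, a 1 kN failure load on a 50 mm diameter, 25 mm thick disk gives roughly 0.51 MPa with k = 1.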
Tool wear modeling using abductive networks
NASA Astrophysics Data System (ADS)
Masory, Oren
1992-09-01
A tool wear model based on Abductive Networks, which consist of a network of "polynomial" nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.
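A single "polynomial" node of such a network amounts to a least-squares polynomial fit of an output (e.g., flank wear) against an input feature. A minimal single-input sketch, solving the normal equations directly; the real abductive network composes many such nodes and also learns the connectivity, which is not shown here.

```python
def fit_poly_node(xs, ys, degree=2):
    # Least-squares fit of one polynomial node: y ~ sum_k c_k * x^k.
    n = degree + 1
    # Build the normal equations A c = b.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))) / A[r][r]
    return coeffs
```

Fitting data generated from y = 1 + 2x + 0.5x^2 recovers those coefficients, confirming the node structure.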
Chen, Qihong; Long, Rong; Quan, Shuhai
2014-01-01
This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant auto-regressive moving average model with exogenous input (ARMAX), using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed as operating-state-dependent time-varying local linear behavior in this framework, a linear constrained model predictive control algorithm is developed to optimize the power split between the fuel cell and the ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and the ultracapacitor and limit the rate of change of the fuel cell current, thereby extending the lifetime of the fuel cell. PMID:24707206
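The fuel-cell ramp constraint at the heart of this scheme can be shown with a rule-based toy split. This is a deliberate simplification of the predictive controller, with hypothetical limits (`fc_max`, `ramp`): the fuel cell tracks demand subject to a rate limit, and the ultracapacitor absorbs the remainder.

```python
def split_power(demand, fc_prev, fc_max=60.0, ramp=5.0):
    # Rule-based sketch: the fuel cell follows demand but its output change
    # per step is rate-limited (protecting the stack), and the
    # ultracapacitor supplies or absorbs whatever remains.
    fc_target = min(max(demand, 0.0), fc_max)
    fc = min(max(fc_target, fc_prev - ramp), fc_prev + ramp)  # limit fluctuation
    uc = demand - fc
    return fc, uc
```

During a sudden load step the fuel cell moves only by the allowed ramp, so the transient is carried by the ultracapacitor, which is the behaviour the MPC enforces as a constraint rather than a fixed rule.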
NASA Astrophysics Data System (ADS)
Ahmadivand, Arash; Golmohammadi, Saeed
2014-01-01
With the purpose of guiding and splitting optical power in the C-band spectrum, we studied Y-shape splitters based on various shapes of nanoparticles as a plasmon waveguide. We applied different configurations of gold (Au) and silver (Ag) nanoparticles, including spheres, rods, and rings, to optimize the efficiency and losses of two- and four-branch splitters. The best performance in light transportation, specifically at the telecom wavelength (λ≈1550 nm), is achieved by nanorings, due to the extra degree of freedom in their geometrical components. In addition, comparisons of several values of the offset distance (doffset) for the examined structures show that Au nanoring splitters with a feasibly lower doffset have high quality in guiding and splitting light through the structure. Finally, we studied four-branch Y-splitters based on Au and Ag nanorings with the least possible offset distances to optimize splitter performance. The power transmission, as a key metric, is calculated for the examined structures.
Hamer, Ann M; Hartung, Daniel M; Haxby, Dean G; Ketchum, Kathy L; Pollack, David A
2006-01-01
One method to reduce drug costs is to promote dose form optimization strategies that take advantage of the flat pricing of some drugs, i.e., the same or nearly the same price for a 100 mg tablet and a 50 mg tablet of the same drug. Dose form optimization includes tablet splitting, taking half of a higher-strength tablet, and dose form consolidation, using 1 higher-strength tablet instead of 2 lower-strength tablets. Dose form optimization can reduce the direct cost of therapy by up to 50% while continuing the same daily dose of the same drug molecule. The objective was to determine if voluntary prescription change forms for antidepressant drugs could induce dosing changes and reduce the cost of antidepressant therapy in a Medicaid population. Specific regimens of 4 selective serotonin reuptake inhibitors (SSRIs), citalopram, escitalopram, paroxetine, and sertraline, were identified for conversion to half tablets or dose optimization. Change forms, which served as valid prescriptions, were faxed to Oregon prescribers in October 2004. The results from both the returned forms and subsequent drug claims data were evaluated using a segmented linear regression. Citalopram claims were excluded from the cost analysis because the drug became available in generic form in October 2004. A total of 1,582 change forms were sent to 556 unique prescribers; 9.2% of the change forms were for dose consolidation and 90.8% were for tablet splitting. Of the 1,118 change forms (70.7%) that were returned, 956 (60.4% of those sent and 85.5% of those returned) authorized a prescription change to a lower-cost dose regimen. The average drug cost per day declined by 14.2%, from $2.26 to $1.94, in the intervention group, versus a 1.6% increase, from $2.52 to $2.56, in the group without dose consolidation or tablet splitting of the 3 SSRIs (sertraline, escitalopram, and immediate-release paroxetine). 
Total drug cost for the 3 SSRIs declined by 35.6%, from $333,567 to $214,794, as a result of a 24.8% decline in the total days of SSRI drug therapy and the 14.2% decline in average SSRI drug cost per day. The estimated monthly cost avoidance from this intervention, based on pharmacy claims data, was approximately $35,285, about 2% of the entire spending on SSRI drugs each month, or about $0.09 per member per month. Program administration costs, excluding costs incurred by prescribers and pharmacy providers, were about 2% of SSRI drug cost savings. Voluntary prescription change forms appear to be an effective and well-accepted tool for obtaining dose form optimization through dose form consolidation and tablet splitting, resulting in reduction in the direct costs of SSRI antidepressant drug therapy with minimal additional program administration costs.
Intra-lymph node injection of biodegradable polymer particles.
Andorko, James I; Tostanoski, Lisa H; Solano, Eduardo; Mukhamedova, Maryam; Jewell, Christopher M
2014-01-02
Generation of adaptive immune response relies on efficient drainage or trafficking of antigen to lymph nodes for processing and presentation of these foreign molecules to T and B lymphocytes. Lymph nodes have thus become critical targets for new vaccines and immunotherapies. A recent strategy for targeting these tissues is direct lymph node injection of soluble vaccine components, and clinical trials involving this technique have been promising. Several biomaterial strategies have also been investigated to improve lymph node targeting, for example, tuning particle size for optimal drainage of biomaterial vaccine particles. In this paper we present a new method that combines direct lymph node injection with biodegradable polymer particles that can be laden with antigen, adjuvant, or other vaccine components. In this method polymeric microparticles or nanoparticles are synthesized by a modified double emulsion protocol incorporating lipid stabilizers. Particle properties (e.g. size, cargo loading) are confirmed by laser diffraction and fluorescent microscopy, respectively. Mouse lymph nodes are then identified by peripheral injection of a nontoxic tracer dye that allows visualization of the target injection site and subsequent deposition of polymer particles in lymph nodes. This technique allows direct control over the doses and combinations of biomaterials and vaccine components delivered to lymph nodes and could be harnessed in the development of new biomaterial-based vaccines.
Scanning elastic scattering spectroscopy detects metastatic breast cancer in sentinel lymph nodes
NASA Astrophysics Data System (ADS)
Austwick, Martin R.; Clark, Benjamin; Mosse, Charles A.; Johnson, Kristie; Chicken, D. Wayne; Somasundaram, Santosh K.; Calabro, Katherine W.; Zhu, Ying; Falzon, Mary; Kocjan, Gabrijela; Fearn, Tom; Bown, Stephen G.; Bigio, Irving J.; Keshtgar, Mohammed R. S.
2010-07-01
A novel method for rapidly detecting metastatic breast cancer within excised sentinel lymph node(s) of the axilla is presented. Elastic scattering spectroscopy (ESS) is a point-contact technique that collects broadband optical spectra sensitive to absorption and scattering within the tissue. A statistical discrimination algorithm was generated from a training set of nearly 3000 clinical spectra and used to test clinical spectra collected from an independent set of nodes. Freshly excised nodes were bivalved and mounted under a fiber-optic plate. Stepper motors raster-scanned a fiber-optic probe over the plate to interrogate the node's cut surface, creating a 20×20 grid of spectra. These spectra were analyzed to create a map of cancer risk across the node surface. Rules were developed to convert these maps to a prediction for the presence of cancer in the node. Using these analyses, a leave-one-out cross-validation to optimize discrimination parameters on 128 scanned nodes gave a sensitivity of 69% for detection of clinically relevant metastases (71% for macrometastases) and a specificity of 96%, comparable to literature results for touch imprint cytology, a standard technique for intraoperative diagnosis. ESS has the advantage of not requiring a pathologist to review the tissue sample.
A new evolutionary system for evolving artificial neural networks.
Yao, X; Liu, Y
1997-01-01
This paper presents a new evolutionary system, EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANNs' behaviors. The five mutation operators proposed in EPNet reflect this emphasis on evolving behaviors. Close behavioral links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANNs' architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, medical diagnosis problems, the Australian credit card assessment problem, and the Mackey-Glass time series prediction problem. The experimental results show that EPNet can produce very compact ANNs with good generalization ability in comparison with other algorithms.
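The node-splitting mutation mentioned above can be made behavior-preserving by duplicating a hidden node and halving its outgoing weight in both copies, so parent and offspring compute the same function. The tiny network, weight values, and representation below are hypothetical, not EPNet's actual encoding.

```python
import math

# Minimal single-hidden-layer network: w_in is a list of input-weight rows
# (one per hidden node), w_out the hidden-to-output weights.
def forward(x, w_in, w_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w_in]
    return sum(wo * h for wo, h in zip(w_out, hidden))

def split_node(w_in, w_out, k):
    # Duplicate hidden node k; halve its outgoing weight in both copies,
    # so the network's input-output behavior is unchanged.
    w_in2 = w_in + [list(w_in[k])]
    w_out2 = list(w_out)
    w_out2[k] = w_out[k] / 2
    w_out2.append(w_out[k] / 2)
    return w_in2, w_out2

x = [0.3, -0.7]
w_in = [[0.5, -0.2], [0.1, 0.9]]
w_out = [1.2, -0.8]
w_in2, w_out2 = split_node(w_in, w_out, 0)
print(abs(forward(x, w_in, w_out) - forward(x, w_in2, w_out2)) < 1e-12)
```

The offspring now has an extra hidden node whose weights can subsequently diverge under further training, while starting from exactly the parent's behavior.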
Optimizing Balanced Incomplete Block Designs for Educational Assessments
ERIC Educational Resources Information Center
van der Linden, Wim J.; Veldkamp, Bernard P.; Carlson, James E.
2004-01-01
A popular design in large-scale educational assessments as well as any other type of survey is the balanced incomplete block design. The design is based on an item pool split into a set of blocks of items that are assigned to sets of "assessment booklets." This article shows how the problem of calculating an optimal balanced incomplete block…
Modeling complexity in engineered infrastructure system: Water distribution network as an example
NASA Astrophysics Data System (ADS)
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules and engineering considerations. Demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand-node distribution and the co-evolution of demand nodes and the network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement in efficiency from engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, and these cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories, and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
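The core of a maximally selected rank statistic can be sketched in a few lines: for each candidate cutpoint, compute the standardized sum of rank scores in the left group under the permutation null, and pick the cutpoint with the largest absolute statistic. This sketch uses plain ranks of a numeric response rather than the paper's log-rank scores, and omits the p-value approximation step entirely.

```python
def rank_scores(y):
    # ranks of the response values (ties broken arbitrarily)
    order = sorted(range(len(y)), key=lambda i: y[i])
    scores = [0.0] * len(y)
    for rank, i in enumerate(order, start=1):
        scores[i] = float(rank)
    return scores

def best_split(x, y):
    n = len(x)
    a = rank_scores(y)
    mean_a = sum(a) / n
    var_a = sum((s - mean_a) ** 2 for s in a) / (n - 1)
    best = (0.0, None)  # (|standardized statistic|, cutpoint)
    for cut in sorted(set(x))[:-1]:
        left = [a[i] for i in range(n) if x[i] <= cut]
        m = len(left)
        # mean and variance of the left-group rank sum under the
        # permutation null (sampling m of n scores without replacement)
        mu = m * mean_a
        sigma2 = m * (n - m) / n * var_a
        t = abs(sum(left) - mu) / sigma2 ** 0.5
        if t > best[0]:
            best = (t, cut)
    return best

# Example: y jumps once x exceeds 3, so cutpoint 3 should score highest
x = [1, 2, 3, 4, 5, 6]
y = [0.1, 0.2, 0.15, 1.1, 1.2, 1.3]
print(best_split(x, y))
```

In the actual method, the maximum over cutpoints is then converted to a p-value (accounting for the multiple candidate cutpoints) so that splitting variables can be compared without favoring those with many split points.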
Are Ducted Mini-Splits Worth It?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, Jonathan M; Maguire, Jeffrey B; Metzger, Cheryn E.
Ducted mini-split heat pumps are gaining popularity in some regions of the country due to their energy-efficient specifications and their ability to be hidden from sight. Although product and installation costs are typically higher than for ductless mini-split heat pumps, this technology is well worth the premium for some homeowners who do not like to see an indoor unit in their living area. Due to the interest in this technology by local utilities and homeowners, the Bonneville Power Administration (BPA) has funded the Pacific Northwest National Laboratory (PNNL) and the National Renewable Energy Laboratory (NREL) to develop capabilities within the Building Energy Optimization (BEopt) tool to model ducted mini-split heat pumps. After the fundamental capabilities were added, energy-use results could be compared to other technologies that were already in BEopt, such as zonal electric resistance heat, central air-source heat pumps, and ductless mini-split heat pumps. Each of these technologies was then compared using five prototype configurations in three different BPA heating zones to determine how the ducted mini-split technology would perform under different scenarios. The result of this project was a set of EnergyPlus models representing the various prototype configurations in each climate zone. Overall, the ducted mini-split heat pumps saved about 33-60% compared to zonal electric resistance heat (with window AC systems modeled in the summer). The results also showed that the ducted mini-split systems used about 4% more energy than the ductless mini-split systems, which saved about 37-64% compared to electric zonal heat (depending on the prototype and climate).
Hierarchical auto-configuration addressing in mobile ad hoc networks (HAAM)
NASA Astrophysics Data System (ADS)
Ram Srikumar, P.; Sumathy, S.
2017-11-01
Addressing plays a vital role in networking to identify devices uniquely. A device must be assigned a unique address in order to participate in data communication in any network. Different protocols defining different types of addressing have been proposed in the literature. Address auto-configuration is a key requirement for self-organizing networks. Existing auto-configuration-based addressing protocols require broadcasting probes to all the nodes in the network before assigning a proper address to a new node, and further broadcasts are then needed to reflect the status of the acquired address in the network. Such methods incur high communication overheads due to repetitive flooding. To address this overhead, a new partially stateful address allocation scheme, the Hierarchical Auto-configuration Addressing (HAAM) scheme, is extended and proposed. Hierarchical addressing fundamentally reduces the latency and overhead incurred during address configuration. The partially stateful addressing algorithm assigns addresses without the need for flooding or global state awareness, which reduces the communication overhead and space complexity, respectively. Nodes are assigned addresses hierarchically so as to maintain the graph of the network as a spanning tree, which effectively avoids the broadcast storm problem. The proposed HAAM algorithm handles network splits and merges efficiently in large-scale mobile ad hoc networks while incurring low communication overheads.
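The hierarchical idea, in which each child's address extends its parent's so that uniqueness needs no network-wide flooding, can be sketched as follows. The tree and tuple-based address format below are made up for illustration and are not HAAM's wire format.

```python
# Toy hierarchical address assignment over a spanning tree: a child's
# address is its parent's address extended by a child index, so every
# address is unique by construction, with no flooding or global state.

def assign_addresses(tree, root):
    addr = {root: (0,)}
    stack = [root]
    while stack:
        node = stack.pop()
        for i, child in enumerate(tree.get(node, [])):
            addr[child] = addr[node] + (i,)  # extend parent's address
            stack.append(child)
    return addr

# hypothetical spanning tree of a small ad hoc network
tree = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
addrs = assign_addresses(tree, "A")
print(addrs["E"])  # (0, 1, 0)
```

A joining node only needs to contact one tree neighbor (its prospective parent) to obtain a collision-free address, which is the property that lets such schemes avoid duplicate-address detection broadcasts.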
Performance evaluation of a bigrating as a beam splitter.
Hwang, R B; Peng, S T
1997-04-01
The design of a bigrating for use as a beam splitter is presented. It is based on a rigorous formulation of plane-wave scattering by a bigrating composed of two individual gratings oriented in different directions. Numerical computations are carried out to optimize the design of a bigrating that performs 1 x 4 beam splitting in two dimensions and to examine its fabrication and operation tolerances. It is found that a bigrating can be designed to perform two functions: beam splitting and polarization purification.
NASA Astrophysics Data System (ADS)
Zhang, Donghao; Matsuura, Haruki; Asada, Akiko
2017-04-01
Some automobile factories have segmented mixed-model production lines into shorter sub-lines according to part group, such as engine, trim, and powertrain. The effects of splitting a line into sub-lines have been reported from the standpoints of worker motivation, productivity improvement, and autonomy based on risk spreading. There has been no mention of the possibility of shortening the line length by altering the product sequence using sub-lines. The purpose of the present paper is to determine the conditions under which sub-lines reduce the line length and the degree to which the line length may be shortened. The line lengths for a non-split line and a line that has been split into sub-lines are compared using three methods for determining the working area: the standard closed boundary, the optimized open boundary, and real-life constant-length stations. The results are discussed by analyzing the upper and lower bounds of the line length. Based on these results, a procedure for deciding whether or not to split a production line is proposed.
Finite element model for brittle fracture and fragmentation
Li, Wei; Delaney, Tristan J.; Jiao, Xiangmin; ...
2016-06-01
A new computational model for brittle fracture and fragmentation has been developed based on finite element analysis of non-linear elasticity equations. The proposed model propagates cracks by splitting the mesh nodes along the most over-strained edges, based on the principal direction of the strain tensor. To prevent elements from overlapping and folding under large deformations, robust geometrical constraints using the method of Lagrange multipliers have been incorporated. The model has been applied to 2D simulations of the formation and propagation of cracks in brittle materials, and of the fracture and fragmentation of stretched and compressed materials.
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.
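The multiplicative assumption can be illustrated in miniature: the forest's conditional intensity is the product of per-tree intensities (equivalently, the log-intensities add), so updating one tree rescales the forest output rather than requiring a full recomputation. The "trees" below are hypothetical one-split lookup tables, not the paper's learned regression trees.

```python
# Each toy "tree" splits on one boolean variable and returns a
# multiplicative factor for the conditional intensity.
def tree_intensity(tree, state):
    var, low, high = tree
    return high if state[var] else low

def forest_intensity(forest, state):
    # multiplicative combination: the forest intensity is the product
    # of the individual tree intensities
    out = 1.0
    for tree in forest:
        out *= tree_intensity(tree, state)
    return out

# hypothetical two-tree forest over binary state variables
forest = [("fever", 0.5, 2.0), ("rash", 1.0, 3.0)]
state = {"fever": True, "rash": False}
print(forest_intensity(forest, state))  # 2.0
```

Because each tree contributes a factor, the parameter count grows with the total number of splits rather than exponentially in the number of parent variables.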
Multiplicative Forests for Continuous-Time Processes.
Weiss, Jeremy C; Natarajan, Sriraam; Page, David
2012-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.
The new management policy: Indonesian PSC-Gross split applied on CO2 flooding project
NASA Astrophysics Data System (ADS)
Irham, S.; Sibuea, S. N.; Danu, A.
2018-01-01
“SIAD” oil field will be developed by CO2 flooding. CO2, a well-known pollutant gas, is injected into the oil reservoir to enhance oil recovery. This technique should be conducted economically, in accordance with the energy management policy in Indonesia. In general, Indonesia has two policy contracts on oil and gas: the older PSC-Cost-Recovery and the newer PSC-Gross-Split (introduced in 2017 as the new energy management plan). The contractor must choose whichever of PSC-Cost-Recovery and PSC-Gross-Split makes more profit. The aim of this paper is to show the best oil and gas contract policy for the contractor. The methods are calculating and comparing the economic indicators. The results of this study are (1) the NPV for PSC-Cost-Recovery is -46 MUS, while for PSC-Gross-Split it is 73 MUS, and (2) the IRR for PSC-Cost-Recovery is 9%, whereas for PSC-Gross-Split it is 11%. The conclusion is that the NPV and IRR for PSC-Gross-Split are greater than those of PSC-Cost-Recovery, but the POT for PSC-Gross-Split is longer than that for PSC-Cost-Recovery. Thus, in this case, the new energy policy contract can be applied to CO2 flooding technology, since it yields higher economic indicators than its antecedent.
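The two indicators compared above, NPV and IRR, are standard discounted cash-flow quantities and can be sketched on a made-up cash-flow stream (the project's actual cash flows are not given here; the discount rate and values below are hypothetical).

```python
# NPV: cash flows discounted at a given rate; IRR: the rate at which
# NPV crosses zero, found here by bisection.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    # bisection on NPV(rate) = 0; valid when NPV is decreasing in rate,
    # i.e. a single sign change in the cash-flow stream
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cf = [-100.0, 30.0, 40.0, 50.0, 30.0]  # hypothetical net cash flows per year
print(round(npv(0.10, cf), 2), round(irr(cf) * 100, 1))
```

A contract alternative with both higher NPV (at the chosen discount rate) and higher IRR dominates on these two indicators, though, as the abstract notes, payout time (POT) can still favor the other contract.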
Popularity versus similarity in growing networks
NASA Astrophysics Data System (ADS)
Krioukov, Dmitri; Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, Mariangeles; Boguna, Marian
2012-02-01
Preferential attachment is a powerful mechanism explaining the emergence of scaling in growing networks. If new connections are established preferentially to more popular nodes in a network, then the network is scale-free. Here we show that not only popularity but also similarity is a strong force shaping the network structure and dynamics. We develop a framework where new connections, instead of preferring popular nodes, optimize certain trade-offs between popularity and similarity. The framework admits a geometric interpretation, in which preferential attachment emerges from local optimization processes. As opposed to preferential attachment, the optimization framework accurately describes large-scale evolution of technological (Internet), social (web of trust), and biological (E.coli metabolic) networks, predicting the probability of new links in them with a remarkable precision. The developed framework can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
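A toy version of the popularity-similarity trade-off can be sketched as follows: each new node receives a random angular coordinate (similarity) and links to the existing nodes that minimize the product of age rank (a popularity proxy, since older nodes are more popular) and angular distance. All parameters are made up; this is not the paper's calibrated model.

```python
import math, random

def angular_distance(a, b):
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def grow_network(n, m=2, seed=0):
    rng = random.Random(seed)
    angles = []   # angles[s-1] is the similarity coordinate of node s
    edges = []
    for t in range(1, n + 1):
        theta = rng.uniform(0, 2 * math.pi)
        if t > m:
            # rank existing nodes by the popularity x similarity cost:
            # birth time s (older = more popular) times angular distance
            ranked = sorted(range(1, t),
                            key=lambda s: s * angular_distance(angles[s - 1], theta))
            edges.extend((s, t) for s in ranked[:m])
        else:
            edges.extend((s, t) for s in range(1, t))
        angles.append(theta)
    return angles, edges

angles, edges = grow_network(50)
print(len(edges))
```

Minimizing the product rather than popularity alone is what distinguishes this from plain preferential attachment: a very close (similar) young node can beat a distant old hub.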
EUV process establishment through litho and etch for N7 node
NASA Astrophysics Data System (ADS)
Kuwahara, Yuhei; Kawakami, Shinichiro; Kubota, Minoru; Matsunaga, Koichi; Nafus, Kathleen; Foubert, Philippe; Mao, Ming
2016-03-01
Extreme ultraviolet lithography (EUVL) technology is steadily approaching high-volume manufacturing for the 16 nm half-pitch node and beyond. However, some challenges remain, for example scanner availability and resist performance (resolution, CD uniformity (CDU), LWR, etch behavior, and so on). Advanced EUV patterning on the ASML NXE:3300/CLEAN TRACK LITHIUS Pro Z-EUV litho cluster has been launched at imec, allowing for finer-pitch patterns for L/S and CH. Tokyo Electron Ltd. and imec are continuously collaborating to develop manufacturing-quality POR processes for the NXE:3300. TEL's technologies to enhance CDU, defectivity, and LWR/LER can improve patterning performance. The patterning is characterized and optimized in both litho and etch for a more complete understanding of the final patterning performance. This paper reports on post-litho CDU improvement by litho process optimization and on post-etch LWR reduction by litho and etch process optimization.
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces challenges such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, core allocation, and spectrum assignment. Simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
Nonreciprocal signal routing in an active quantum network
NASA Astrophysics Data System (ADS)
Metelmann, A.; Türeci, H. E.
2018-04-01
As superconductor quantum technologies move towards large-scale integrated circuits, a robust and flexible approach to routing photons at the quantum level becomes a critical problem. Active circuits, which contain parametrically driven elements selectively embedded in the circuit, offer a viable solution. Here, we present a general strategy for nonreciprocally routing quantum signals between two sites of a given lattice of oscillators, implementable with existing superconducting circuit components. Our approach makes use of a dual lattice of overdamped oscillators linking the nodes of the main lattice. Solutions for spatially selective driving of the lattice elements can be found which optimally balance coherent and dissipative hopping of microwave photons to nonreciprocally route signals between two given nodes. In certain lattices these optimal solutions are obtained at the exceptional point of the dynamical matrix of the network. We also demonstrate that signal and noise transmission characteristics can be separately optimized.
Optimization and Planning of Emergency Evacuation Routes Considering Traffic Control
Zhang, Lijun; Wang, Zhaohua
2014-01-01
Emergencies, especially major ones, happen quickly, randomly, and unpredictably, and generally bring great harm to people's lives and the economy. Therefore, governments and many professionals devote themselves to taking effective measures and providing optimal evacuation plans. This paper establishes two different emergency evacuation models on the basis of the maximum flow model (MFM) and the minimum-cost maximum flow model (MC-MFM), and proposes corresponding algorithms for evacuation from one source node to one designated destination (one-to-one evacuation). Furthermore, we extend our evacuation model from one source node to many designated destinations (one-to-many evacuation). Finally, we present a case analysis of evacuation optimization and planning in Beijing, and obtain the desired evacuation routes and effective traffic control measures from the perspective of sufficiency and practicability. Both analytical and numerical results support that our models are feasible and practical.
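The maximum flow model underlying the one-to-one evacuation case can be sketched with a plain Edmonds-Karp implementation: road capacities bound the evacuation rate from the source zone to the shelter, and the max flow is that bound. The four-node network and capacities below are made up for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # bottleneck along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# nodes: 0 = source zone, 3 = shelter; capacities in vehicles/hour
cap = [[0, 10, 10, 0],
       [0, 0, 2, 8],
       [0, 0, 0, 9],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))
```

The MC-MFM variant would additionally attach a travel-time cost to each arc and pick, among all maximum flows, one of minimum total cost.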
A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs
Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian
2018-01-01
In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a means of improving reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or further transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. In the case of increased transmission power, the COOR(R) strategy chooses a node that has higher communication reliability at the same distance, in comparison to traditional opportunistic routing, when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay for data to reach the sink is reduced by shortening the communication time between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. 
As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for the packet to reach the sink under longer transmission distances; (b) the reliability can be improved, since it is the product of the reliability of every hop of the routing path, and the hop count is reduced while the reliability of each hop stays the same as in the traditional method. After analyzing the energy consumption of the network in detail, the value of the optimized transmission power in different areas is given. On the basis of a large number of experimental and theoretical analyses, the results show that the COOR scheme increases communication reliability by 36.62–87.77%, decreases delay by 21.09–52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN.
A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs.
Xu, Xin; Yuan, Minjiao; Liu, Xiao; Liu, Anfeng; Xiong, Neal N; Cai, Zhiping; Wang, Tian
2018-05-03
In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a means of improving reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or further transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. In the case of increased transmission power, the COOR(R) strategy chooses a node that has higher communication reliability at the same distance, in comparison to traditional opportunistic routing, when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay for data to reach the sink is reduced by shortening the communication time between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. 
As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for a packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since end-to-end reliability is the product of the reliability of every hop of the routing path, and the hop count is reduced while the per-hop reliability is the same as in the traditional method. After analyzing the energy consumption of the network in detail, the optimized transmission power for different areas is given. On the basis of extensive experimental and theoretical analyses, the results show that the COOR scheme increases communication reliability by 36.62–87.77%, decreases delay by 21.09–52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN. PMID:29751589
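Reason (b) above can be made concrete with a small sketch: end-to-end reliability is the product of per-hop reliabilities, so a path with fewer hops at the same per-hop reliability is strictly more reliable. The hop counts and reliability value below are hypothetical, chosen only for illustration.

```python
def path_reliability(per_hop: float, hops: int) -> float:
    """End-to-end reliability of a path whose hops share one per-hop reliability."""
    return per_hop ** hops

# COOR(P)-style routing trades many short hops for fewer, longer ones at the
# same per-hop reliability (hop counts here are hypothetical).
traditional = path_reliability(0.95, 8)   # 8 short hops
coor_like = path_reliability(0.95, 5)     # 5 longer hops
assert coor_like > traditional            # fewer hops => higher reliability
```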
Free-Lagrange methods for compressible hydrodynamics in two space dimensions
NASA Astrophysics Data System (ADS)
Crowley, W. E.
1985-03-01
Since 1970 a research and development program in Free-Lagrange methods has been active at Livermore. The initial steps were taken with incompressible flows for simplicity. Since then the effort has been concentrated on compressible flows with shocks in two space dimensions and time. In general, the line integral method has been used to evaluate derivatives and the artificial viscosity method has been used to deal with shocks. Basically, two Free-Lagrange formulations for compressible flows in two space dimensions and time have been tested, and both will be described. In method one, all prognostic quantities were node centered and staggered in time. The artificial viscosity was zone centered. One mesh reconnection philosophy was that the mesh should be optimized so that nearest neighbors were connected together. Another was that vertex angles should tend toward equality. In method one, all mesh elements were triangles. In method two, both quadrilateral and triangular mesh elements are permitted. The mesh variables are staggered in space and time as suggested originally by Richtmyer and von Neumann. The mesh reconnection strategy is entirely different in method two. In contrast to the global strategy of nearest neighbors, we now have a more local strategy that reconnects in order to keep the integration time step above a user-chosen threshold. An additional strategy reconnects in the vicinity of large relative fluid motions. Mesh reconnection consists of two parts: (1) the tools that permit nodes to be merged, quads to be split into triangles, and so on; and (2) the strategy that dictates how and when to use the tools. Both tools and strategies change with time in a continuing effort to expand the capabilities of the method. New ideas are continually being tried and evaluated.
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.
Jiang, Peng; Xu, Yiming; Liu, Jun
2017-01-19
For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the event management node with the number of candidate nodes and the average residual energy, as well as the distance to the event. Second, each management node estimates the probability of its neighbor nodes' being selected by the event it manages with the distance level, the residual energy level, and the number of dynamic coverage event of these nodes. Third, each management node establishes an optimization model that uses expectation energy consumption and the residual energy variance of its neighbor nodes and detects the performance of the events it manages as targets. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and the best strategy via technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and a situation in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
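The final strategy-selection step of DEEKA ranks the Pareto set with TOPSIS. A minimal, generic TOPSIS sketch follows; it is not DEEKA's exact formulation, and the criteria, weights, and scores are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.

    matrix:  alternatives x criteria scores
    weights: criterion weights
    benefit: True where larger is better, False where smaller is better
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)   # closeness: higher is better

# Three candidate strategies scored on (expected energy use, residual-energy
# variance, detection performance); only the last is a benefit criterion.
scores = topsis([[3.0, 0.4, 0.9],
                 [2.5, 0.6, 0.7],
                 [4.0, 0.2, 0.8]],
                weights=[0.4, 0.3, 0.3],
                benefit=[False, False, True])
best = int(np.argmax(scores))
```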
NASA Astrophysics Data System (ADS)
Guex, Guillaume
2016-05-01
In recent articles about graphs, different models proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing the behavior of the path to be tuned toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g., people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. With again a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
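As a rough illustration of the fractional-factorial idea the authors rely on, a two-level fractional design assigns extra factors to interaction columns of a smaller full factorial, so more variables fit into fewer runs. The sketch below is generic, not the paper's actual design, and the generator choice is an assumption.

```python
from itertools import product

def fractional_factorial(base_factors: int, generators):
    """Two-level fractional factorial: a full design in the base factors,
    with each extra factor's column defined as a product of base columns
    (the design generators)."""
    runs = []
    for levels in product((-1, 1), repeat=base_factors):
        row = list(levels)
        for gen in generators:          # e.g. (0, 1) means new factor = A*B
            val = 1
            for i in gen:
                val *= levels[i]
            row.append(val)
        runs.append(row)
    return runs

# A 2^(5-2) design: 5 factors in 8 runs, with D = A*B and E = A*C
# (hypothetical generators for illustration).
design = fractional_factorial(3, [(0, 1), (0, 2)])
assert len(design) == 8 and len(design[0]) == 5
```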
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M
2013-01-01
This paper presents an implementation of the brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large, high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
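A CPU-side sketch of the brute-force exact k-NNG construction (illustrative only; the paper's contribution is the multi-level GPU parallelization, which is not shown here):

```python
import numpy as np

def knn_graph(points: np.ndarray, k: int) -> np.ndarray:
    """Brute-force exact k-NNG: for every point, the indices of its k
    nearest neighbours under Euclidean distance (self excluded)."""
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 - 2 a.b + ||b||^2.
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] - 2.0 * points @ points.T + sq[None, :]
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]  # k smallest per row

rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 16))
graph = knn_graph(pts, k=5)
assert graph.shape == (100, 5)
```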
2013-06-03
ORA uses a Java interface for ease of use and a C++ computational backend for optimizing a network's design structure. The most current version of the ORA software (3.0.8.5) is available on the CASOS website: http://casos.cs.cmu.edu. Eigenvector Centrality: the node most connected to other highly connected nodes; assists in identifying those who can mobilize others.
Hybrid Techniques for Optimizing Complex Systems
2009-12-01
These vectors are randomly generated, and conventional functional simulation propagates signatures to the internal and output nodes. For instance, if two internal nodes x and y satisfy the property (y = 1) ⇒ (x = 1), where ⇒ denotes "implies", then y gives information about x whenever y = 1.
An Energy-Efficient Mobile Sink-Based Unequal Clustering Mechanism for WSNs.
Gharaei, Niayesh; Abu Bakar, Kamalrulnizam; Mohd Hashim, Siti Zaiton; Hosseingholi Pourasl, Ali; Siraj, Mohammad; Darwish, Tasneem
2017-08-11
Network lifetime and energy efficiency are crucial performance metrics used to evaluate wireless sensor networks (WSNs). Decreasing and balancing the energy consumption of nodes can be employed to increase network lifetime. In cluster-based WSNs, one objective of applying clustering is to decrease the energy consumption of the network. In fact, the clustering technique will be considered effective if the energy consumed by sensor nodes decreases after applying clustering; however, this aim will not be achieved if the cluster size is not properly chosen. Therefore, in this paper, the energy consumption of nodes, before clustering, is considered to determine the optimal cluster size. A two-stage Genetic Algorithm (GA) is employed to determine the optimal interval of cluster size and derive the exact value from the interval. Furthermore, the energy hole is an inherent problem which leads to a remarkable decrease in the network's lifespan. This problem stems from the asynchronous energy depletion of nodes located in different layers of the network. For this reason, we propose the Circular Motion of Mobile-Sink with Varied Velocity Algorithm (CM2SV2) to balance the energy consumption ratio of cluster heads (CHs). According to the results, these strategies could largely increase the network's lifetime by decreasing the energy consumption of sensors and balancing the energy consumption among CHs.
Definition and automatic anatomy recognition of lymph node zones in the pelvis on CT images
NASA Astrophysics Data System (ADS)
Liu, Yu; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Guo, Shuxu; Attor, Rosemary; Reinicke, Danica; Torigian, Drew A.
2016-03-01
Currently, unlike IALSC-defined thoracic lymph node zones, no explicitly provided definitions for lymph nodes in other body regions are available. Yet, definitions are critical for standardizing the recognition, delineation, quantification, and reporting of lymphadenopathy in other body regions. Continuing from our previous work in the thorax, this paper proposes a standardized definition of the grouping of pelvic lymph nodes into 10 zones. We subsequently employ our earlier Automatic Anatomy Recognition (AAR) framework designed for body-wide organ modeling, recognition, and delineation to implement these zonal definitions, where the zones are treated as anatomic objects. First, all 10 zones and the key anatomic organs used as anchors are manually delineated under expert supervision for constructing fuzzy anatomy models of the assembly of organs together with the zones. Then, an optimal hierarchical arrangement of these objects is constructed for the purpose of achieving the best zonal recognition. For actual localization of the objects, two strategies are used: optimal thresholded search for the organs, and a one-shot method for the zones, where the known relationship of the zones to key organs is exploited. Based on 50 computed tomography (CT) image data sets for the pelvic body region and an equal division into training and test subsets, automatic zonal localization within 1-3 voxels is achieved.
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local, and LeaderRank. The Ing process converges in strongly connected networks, with speed depending on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. Comparisons with eight renowned centralities via simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information-spreading strategies.
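The gathering step can be sketched as repeated application of a transformation matrix to a priori score vector; with enough iterations the normalized scores approach eigenvector centrality, the limit case mentioned above. The choice of A + I as the transformation matrix below is an assumption made for illustration (it avoids oscillation on bipartite graphs); the paper treats the matrix as a tunable parameter.

```python
import numpy as np

def ing_scores(adj: np.ndarray, prior: np.ndarray, iterations: int) -> np.ndarray:
    """Iteratively gather neighbour information: each step mixes a node's
    score with the sum of its neighbours' scores.  With enough iterations
    the normalized scores approach eigenvector centrality."""
    m = adj + np.eye(len(adj))      # A + I: assumed transformation matrix
    x = np.asarray(prior, dtype=float)
    for _ in range(iterations):
        x = m @ x
        x /= np.linalg.norm(x)      # keep the scores from overflowing
    return x

# Star graph: hub node 0 linked to leaves 1..4.
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
scores = ing_scores(A, prior=np.ones(5), iterations=50)
assert scores[0] > scores[1]        # the hub is ranked most influential
```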
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby
On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
Mobile Wireless Sensor Networks for Advanced Soil Sensing and Ecosystem Monitoring
NASA Astrophysics Data System (ADS)
Mollenhauer, Hannes; Schima, Robert; Remmler, Paul; Mollenhauer, Olaf; Hutschenreuther, Tino; Toepfer, Hannes; Dietrich, Peter; Bumberger, Jan
2015-04-01
For an adequate characterization of ecosystems, it is necessary to detect individual processes with suitable monitoring strategies and methods. Due to the natural complexity of all environmental compartments, single-point or temporally and spatially fixed measurements are mostly insufficient for an adequate representation. The application of mobile wireless sensor networks for soil and atmosphere sensing offers significant benefits, due to the simple adjustment of the sensor distribution, the sensor types and the sample rate (e.g., by using optimization approaches or event-triggering modes) to the local test conditions. This can be essential for the monitoring of heterogeneous and dynamic environmental systems and processes. One significant advantage of mobile ad-hoc wireless sensor networks is their self-organizing behavior: the network autonomously initializes and optimizes itself. Satellite-based localization yields a major reduction in installation and operation costs and time. In addition, single-point measurements with one sensor are significantly improved by measuring continuously at several optimized points. Since analog and digital signal processing and computation are performed in the sensor nodes close to the sensors, the amount of data to be transmitted can be significantly reduced, which leads to better energy management of the nodes. Furthermore, the miniaturization of the nodes and energy harvesting are current topics under investigation. First results of field measurements are given to present the potentials and limitations of this application in environmental science. In particular, in-situ data with numerous specific soil and atmosphere parameters per sensor node (more than 25), recorded over several days, illustrate the high performance of this system for advanced soil sensing and soil-atmosphere interaction monitoring.
Moreover, investigations of biotic and abiotic process interactions and the optimization of sensor positioning for measuring soil moisture are within the scope of this work, and initial results on these issues will be presented.
Design and optimization of all-optical networks
NASA Astrophysics Data System (ADS)
Xiao, Gaoxi
1999-10-01
In this thesis, we present our research results on the design and optimization of all-optical networks. We divide our results into the following four parts: 1. In the first part, we consider broadcast-and-select networks. In our research, we propose an alternative and cheaper network configuration to hide the tuning time. In addition, we derive lower bounds on the optimal schedule lengths and prove that they are tighter than the best existing bounds. 2. In the second part, we consider all-optical wide area networks. We propose a set of algorithms for allocating a given number of wavelength converters (WCs) to the nodes. We adopt a simulation-based optimization approach, in which we collect utilization statistics of WCs from computer simulation and then perform optimization to allocate the WCs. Therefore, our algorithms are widely applicable and they are not restricted to any particular model and assumption. We have conducted extensive computer simulation on regular and irregular networks under both uniform and non-uniform traffic. We see that our method can get nearly the same performance as that of full wavelength conversion by using a much smaller number of WCs. Compared with the best existing method, the results show that our algorithms can significantly reduce (1) the overall blocking probability (i.e., better mean quality of service) and (2) the maximum of the blocking probabilities experienced at all the source nodes (i.e., better fairness). Equivalently, for a given performance requirement on blocking probability, our algorithms can significantly reduce the number of WCs required. 3. In the third part, we design and optimize the physical topology of all-optical wide area networks. We show that the design problem is NP-complete and we propose a heuristic algorithm called the two-stage cut saturation algorithm for this problem.
Simulation results show that (1) the proposed algorithm can efficiently design networks with low cost and high utilization, and (2) if wavelength converters are available to support full wavelength conversion, the cost of the links can be significantly reduced. 4. In the fourth part, we consider all-optical wide area networks with multiple fibers per link. We design a node configuration for all-optical networks. We exploit the flexibility that, to establish a lightpath across a node, we can select any one of the available channels in the incoming link and any one of the available channels in the outgoing link. As a result, the proposed node configuration requires a small number of small optical switches while it can achieve nearly the same performance as the existing one, and there is no additional crosstalk other than the intrinsic crosstalk within each single-chip optical switch.* (Abstract shortened by UMI.) *Originally published in DAI Vol. 60, No. 2. Reprinted here with corrected author name.
Chan, John D.; McCorvy, John D.; Acharya, Sreemoyee; Day, Timothy A.; Roth, Bryan L.; Marchant, Jonathan S.
2016-01-01
Schistosomiasis is a tropical parasitic disease afflicting ~200 million people worldwide and current therapy depends on a single drug (praziquantel) which exhibits several non-optimal features. These shortcomings underpin the need for next generation anthelmintics, but the process of validating physiologically relevant targets (‘target selection’) and pharmacologically profiling them is challenging. Remarkably, even though over a quarter of current human therapeutics target rhodopsin-like G protein coupled receptors (GPCRs), no library screen of a flatworm GPCR has yet been reported. Here, we have pharmacologically profiled a schistosome serotonergic GPCR (Sm.5HTR) implicated as a downstream modulator of PZQ efficacy, in a miniaturized screening assay compatible with high content screening. This approach employs a split luciferase based biosensor sensitive to cellular cAMP levels that resolves the proximal kinetics of GPCR modulation in intact cells. Data evidence a divergent pharmacological signature between the parasitic serotonergic receptor and the closest human GPCR homolog (Hs.5HTR7), supporting the feasibility of optimizing parasitic selective pharmacophores. New ligands, and chemical series, with potency and selectivity for Sm.5HTR over Hs.5HTR7 are identified in vitro and validated for in vivo efficacy against schistosomules and adult worms. Sm.5HTR also displayed a property resembling irreversible inactivation, a phenomenon discovered at Hs.5HTR7, which enhances the appeal of this abundantly expressed parasite GPCR as a target for anthelmintic ligand design. Overall, these data underscore the feasibility of profiling flatworm GPCRs in a high throughput screening format competent to resolve different classes of GPCR modulators. Further, these data underscore the promise of Sm.5HTR as a chemotherapeutically vulnerable node for development of next generation anthelmintics. PMID:27187180
Optimizing The Number Of Steps In Learning Tasks For Complex Skills
ERIC Educational Resources Information Center
Nadolski, Rob J.; Kirschner, Paul A.; van Merrienboer, Jeroen J.G.
2005-01-01
Background: Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. Aim: The aim of the study is to investigate the relation between the number of…
Experimental Study of Split-Path Transmission Load Sharing
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.; Delgado, Irebert R.
1996-01-01
Split-path transmissions are promising, attractive alternatives to the common planetary transmissions for helicopters. The split-path design offers two parallel paths for transmitting torque from the engine to the rotor. Ideally, the transmitted torque is shared equally between the two load paths; however, because of manufacturing tolerances, the design must be sized to allow for other than equal load sharing. To study the effect of tolerances, experiments were conducted using the NASA split-path test gearbox. Two gearboxes, nominally identical except for manufacturing tolerances, were tested. The clocking angle was considered to be a design parameter and used to adjust the load sharing of an otherwise fixed design. The torque carried in each path was measured for a matrix of input torques and clocking angles. The data were used to determine the optimal value and a tolerance for the clocking angles such that the most heavily loaded split path carried no greater than 53 percent of an input shaft torque of 367 N-m. The range of clocking angles satisfying this condition was -0.0012 +/- 0.0007 rad for box 1 and -0.0023 +/- 0.0009 rad for box 2. This study indicates that split-path gearboxes can be used successfully in rotorcraft and can be manufactured with existing technology.
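The 53-percent criterion above is simply a statement about the heavier path's share of the total input torque. A minimal sketch (the example torque split is hypothetical):

```python
def load_share(torque_path1: float, torque_path2: float) -> float:
    """Fraction of the total input torque carried by the heavier path."""
    return max(torque_path1, torque_path2) / (torque_path1 + torque_path2)

# Hypothetical split of a 367 N-m input torque between the two paths:
share = load_share(190.0, 177.0)
assert share <= 0.53    # meets the study's load-sharing criterion
```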
Seco, J; Clark, C H; Evans, P M; Webb, S
2006-05-01
This study focuses on understanding the impact of intensity-modulated radiotherapy (IMRT) delivery effects when applied to plans generated by commercial treatment-planning systems such as Pinnacle (ADAC Laboratories Inc.) and CadPlan/Helios (Varian Medical Systems). These commercial planning systems have had several version upgrades (with improvements in the optimization algorithm), but the IMRT delivery effects have not been incorporated into the optimization process. IMRT delivery effects include head-scatter fluence from IMRT fields, transmission through leaves and the effect of the rounded shape of the leaf ends. They are usually accounted for after optimization when leaf sequencing the "optimal" fluence profiles, to derive the delivered fluence profile. The study was divided into two main parts: (a) analysing the dose distribution within the planning-target volume (PTV), produced by each of the commercial treatment-planning systems, after the delivered fluence had been renormalized to deliver the correct dose to the PTV; and (b) studying the impact of the IMRT delivery technique on the surrounding critical organs such as the spinal cord, lungs, rectum, bladder etc. The study was performed for tumours of (i) the oesophagus and (ii) the prostate and pelvic nodes. An oesophagus case was planned with the Pinnacle planning system for IMRT delivery, via multiple-static fields (MSF) and compensators, using the Elekta SL25 with a multileaf collimator (MLC) component. A prostate and pelvic nodes IMRT plan was performed with the Cadplan/Helios system for a dynamic delivery (DMLC) using the Varian 120-leaf Millennium MLC. In these commercial planning systems, since IMRT delivery effects are not included into the optimization process, fluence renormalization is required such that the median delivered PTV dose equals the initial prescribed PTV dose. In preparing the optimum fluence profile for delivery, the PTV dose has been "smeared" by the IMRT delivery techniques. 
In the case of the oesophagus, the critical organ, spinal cord, received a greater dose than initially planned, due to the delivery effects. The increase in the spinal cord dose is of the order of 2-3 Gy. In the case of the prostate and pelvic nodes, the IMRT delivery effects led to an increase of approximately 2 Gy in the dose delivered to the secondary PTV, the pelvic nodes. In addition to this, the small bowel, rectum and bladder received an increased dose of the order of 2-3 Gy to 50% of their total volume. IMRT delivery techniques strongly influence the delivered dose distributions for the oesophagus and prostate/pelvic nodes tumour sites and these effects are not yet accounted for in the Pinnacle and the CadPlan/Helios planning systems. Currently, they must be taken into account during the optimization stage by altering the dose limits accepted during optimization so that the final (sequenced) dose is within the constraints.
A graph decomposition-based approach for water distribution network optimization
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.
2013-04-01
A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
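The decomposition rests on the connectivity of the network's components: cut vertices (articulation points) are natural boundaries between subnetworks. A minimal sketch of finding them with Tarjan's DFS follows; it is illustrative only, not the paper's exact graph-theory algorithms.

```python
from collections import defaultdict

def articulation_points(edges):
    """Cut vertices of an undirected graph (Tarjan's DFS).  Removing one
    splits the network into subnetworks that can be optimized separately."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:                               # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:     # root with 2+ subtrees
            cuts.add(u)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return cuts

# Two loops joined at node 2: node 2 is the only articulation point.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
assert articulation_points(edges) == {2}
```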
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
Chen, Ying-ping; Chen, Chao-Hong
2010-01-01
An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions, on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
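The core SoD rule — split an interval at a random cut point while it holds more than a threshold number of search points, discard empty intervals, then code the rest — can be sketched as follows (a one-dimensional illustration under simplifying assumptions; function names and the fixed threshold are ours, and the actual method decreases the threshold each EDA iteration):

```python
import random

def split_on_demand(points, interval, threshold, rng):
    """Recursively split an interval at a random cut while it contains
    more than `threshold` search points; empty intervals are discarded."""
    lo, hi = interval
    inside = [p for p in points if lo <= p < hi]
    if not inside:
        return []                        # keep nonempty intervals only
    if len(inside) <= threshold:
        return [(lo, hi)]
    cut = rng.uniform(lo, hi)
    return (split_on_demand(inside, (lo, cut), threshold, rng)
            + split_on_demand(inside, (cut, hi), threshold, rng))

def discretize(points, threshold, seed=0):
    """Assign integer codes to points according to the resulting intervals."""
    rng = random.Random(seed)
    upper = max(points) + 1e-9           # half-open cover of all points
    intervals = split_on_demand(points, (min(points), upper), threshold, rng)
    code_of = {}
    for p in points:
        for code, (lo, hi) in enumerate(intervals):
            if lo <= p < hi:
                code_of[p] = code
                break
    return intervals, code_of
```

The integer codes are what a discrete-variable EDA such as ECGA would then model.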
Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores
Kim, Youngmin; Lee, Chan-Gun
2017-01-01
In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost. The use of limited-energy batteries is closely related to the lifetime of the sensor nodes in wireless sensor networks, so efficient energy management is important for extending that lifetime. Most effort for improving power efficiency in tiny sensor nodes has focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores strongly requires attention to the problem of reducing power consumption in the cores themselves. In this paper, we propose an energy-efficient scheduling method for sensor nodes with uniform multi-core processors. We extend T-Ler plane based scheduling, proposed for globally optimal scheduling of uniform multi-core and multi-processor systems, to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize dynamic power management. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Junghyun; Jo, Gangwon; Jung, Jaehoon
Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.
Le, Duc Van; Oh, Hoon; Yoon, Seokhoon
2013-07-05
In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. To tackle the problem of low network performance and provide the desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay.
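The heart of the link reinforcement step — find the lowest-quality link on the active path and move a robotic relay between its endpoints — can be sketched as below. The names and the midpoint heuristic are illustrative; RoCoMAR searches for an optimal relay position rather than simply bisecting the link:

```python
def weakest_link(path, quality):
    """Return index i such that link (path[i], path[i+1]) has the lowest quality."""
    return min(range(len(path) - 1),
               key=lambda i: quality[(path[i], path[i + 1])])

def relay_midpoint(pos, path, i):
    """Initial relay placement between the endpoints of link i (midpoint heuristic)."""
    (x1, y1), (x2, y2) = pos[path[i]], pos[path[i + 1]]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```

In the full scheme this choice would be re-evaluated repeatedly as regular nodes move, with the relay adjusting its position to maintain the reinforced link.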
Atallah, I; Milet, C; Quatre, R; Henry, M; Reyt, E; Coll, J-L; Hurbin, A; Righini, C A
2015-12-01
To study the role of near-infrared fluorescence imaging in the detection and resection of metastatic cervical lymph nodes in head and neck cancer. CAL33 head and neck cancer cells of human origin were implanted in the oral cavity of nude mice. The mice were followed up after tumor resection to detect the development of lymph node metastases. A specific fluorescent tracer for αvβ3 integrin, expressed by CAL33 cells, was injected intravenously in the surviving mice between the second and the fourth month following tumor resection. A near-infrared fluorescence-imaging camera was used to detect tracer uptake in metastatic cervical lymph nodes and to guide lymph node resection for histological analysis. Lymph node metastases were observed in 42.8% of surviving mice between the second and the fourth month following orthotopic tumor resection. Near-infrared fluorescence imaging provided real-time intraoperative detection of clinical and subclinical lymph node metastases. These results were confirmed histologically. Near-infrared fluorescence imaging provides real-time contrast between normal and malignant tissue, allowing intraoperative detection of metastatic lymph nodes. This preclinical stage is essential before testing the technique in humans. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee
2014-02-01
Sequence subgrouping for a given sequence set can enable various informative tasks, such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because an identity threshold for sequence subgrouping may vary according to the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method, named 'Subgrouping Automata' (SA), was constructed. First, a tree analysis module analyzes the structure of the tree and calculates all possible subgroups in each node. A sequence similarity analysis module calculates the average sequence similarity for all subgroups in each node. A representative sequence generation module finds a representative sequence using profile analysis and self-scoring for each subgroup. For all nodes, average sequence similarities are calculated, and 'Subgrouping Automata' searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.
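The cut criterion — pick the node where the average intra-subgroup similarity rises most significantly between adjacent nodes — reduces to a two-sample t statistic. A sketch under simplifying assumptions (pooled-variance Student's t; the paper's exact formula and tree traversal may differ):

```python
import math

def t_value(a, b):
    """Pooled-variance two-sample t statistic between similarity samples a and b."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / (sp * math.sqrt(1 / na + 1 / nb))

def optimal_cut(level_similarities):
    """Index of the node whose average similarity rises most significantly
    relative to the previous node along the tree."""
    return max(range(1, len(level_similarities)),
               key=lambda i: t_value(level_similarities[i - 1],
                                     level_similarities[i]))
```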
Use of multi-node wells in the Groundwater-Management Process of MODFLOW-2005 (GWM-2005)
Ahlfeld, David P.; Barlow, Paul M.
2013-01-01
Many groundwater wells are open to multiple aquifers or to multiple intervals within a single aquifer. These types of wells can be represented in numerical simulations of groundwater flow by use of the Multi-Node Well (MNW) Packages developed for the U.S. Geological Survey’s MODFLOW model. However, previous versions of the Groundwater-Management (GWM) Process for MODFLOW did not allow the use of multi-node wells in groundwater-management formulations. This report describes modifications to the MODFLOW–2005 version of the GWM Process (GWM–2005) to provide for such use with the MNW2 Package. Multi-node wells can be incorporated into a management formulation as flow-rate decision variables for which optimal withdrawal or injection rates will be determined as part of the GWM–2005 solution process. In addition, the heads within multi-node wells can be used as head-type state variables, and, in that capacity, be included in the objective function or constraint set of a management formulation. Simple head bounds also can be defined to constrain water levels at multi-node wells. The report provides instructions for including multi-node wells in the GWM–2005 data-input files and a sample problem that demonstrates use of multi-node wells in a typical groundwater-management problem.
Barall, Michael
2009-01-01
We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks
Jiang, Peng; Xu, Yiming; Liu, Jun
2017-01-01
For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node using the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events each of these nodes dynamically covers. Third, each management node establishes an optimization model that takes the expected energy consumption and the residual energy variance of its neighbor nodes, as well as the detection performance for the events it manages, as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model, and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm explicitly considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime. PMID:28106837
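The final selection step applies TOPSIS to the Pareto set returned by NSGA-II. A minimal TOPSIS sketch (standard vector normalization; the criteria weights and benefit/cost flags are inputs of our choosing, not values from the paper):

```python
import math

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better on criterion j.
    Returns closeness-to-ideal scores in [0, 1], larger is better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_plus = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_minus = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores
```

In DEEKA each management node would rank its Pareto-optimal assistant-selection strategies this way and adopt the highest-scoring one.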
Moussawi, A; Derzsy, N; Lin, X; Szymanski, B K; Korniss, G
2017-09-15
Cascading failures are a critical vulnerability of complex information or infrastructure networks. Here we investigate the properties of load-based cascading failures in real and synthetic spatially-embedded network structures, and propose mitigation strategies to reduce the severity of damages caused by such failures. We introduce a stochastic method for optimal heterogeneous distribution of resources (node capacities) subject to a fixed total cost. Additionally, we design and compare the performance of networks with N-stable and (N-1)-stable network-capacity allocations by triggering cascades using various real-world node-attack and node-failure scenarios. We show that failure mitigation through increased node protection can be effectively achieved against single-node failures. However, mitigating against multiple node failures is much more difficult due to the combinatorial increase in possible sets of initially failing nodes. We analyze the robustness of the system with increasing protection, and find that a critical tolerance exists at which the system undergoes a phase transition, and above which the network almost completely survives an attack. Moreover, we show that cascade-size distributions measured in this region exhibit a power-law decay. Finally, we find a strong correlation between cascade sizes induced by individual nodes and sets of nodes. We also show that network topology alone is a weak predictor in determining the progression of cascading failures.
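A toy version of a load-based cascade illustrates the mechanism the study analyzes: when a node fails, its load must go somewhere, and nodes pushed past capacity fail in turn. This sketch redistributes equally to surviving neighbors; real overload models typically recompute loads globally (e.g., by betweenness), so treat it only as an illustration of the tolerance effect:

```python
def cascade(adj, load, capacity, initial_failures):
    """Toy load-redistribution cascade: a failed node's load is split equally
    among its surviving neighbors; any node pushed past capacity fails next
    round. Returns the final set of failed nodes."""
    load = dict(load)                    # do not mutate the caller's loads
    failed = set(initial_failures)
    frontier = set(initial_failures)
    while frontier:
        overloaded = set()
        for f in frontier:
            alive = [n for n in adj[f] if n not in failed]
            if not alive:
                continue                 # load is shed, not redistributed
            share = load[f] / len(alive)
            for n in alive:
                load[n] += share
                if load[n] > capacity[n]:
                    overloaded.add(n)
        failed |= overloaded
        frontier = overloaded
    return failed
```

Raising capacities (the tolerance) contains the cascade, mirroring the phase transition the authors report as node protection increases.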
Mao, X W; Yang, J Y; Zheng, X X; Wang, L; Zhu, L; Li, Y; Xiong, H K; Sun, J Y
2017-06-12
Objective: To compare the clinical value of two quantitative methods for analyzing endobronchial ultrasound real-time elastography (EBUS-RTE) images in evaluating intrathoracic lymph nodes. Methods: From January 2014 to April 2014, EBUS-RTE examination was performed in patients who received EBUS-TBNA examination in Shanghai Chest Hospital. Each intrathoracic lymph node had a selected EBUS-RTE image, for which the stiff area ratio and the mean hue value of the region of interest (ROI) were calculated. The final diagnosis of each lymph node was based on the pathologic/microbiologic results of EBUS-TBNA, pathologic/microbiologic results of other examinations, and clinical follow-up. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy were evaluated for distinguishing malignant and benign lesions. Results: Fifty-six patients and 68 lymph nodes were enrolled in this study, of which 35 lymph nodes were malignant and 33 were benign. The stiff area ratio and mean hue value of benign versus malignant lesions were 0.32±0.29 vs. 0.62±0.20 and 109.99±28.13 vs. 141.62±17.52, respectively, with statistically significant differences for both methods (t = -5.14, P < 0.01; t = -5.53, P < 0.01). The areas under the curve were 0.813 and 0.814 for the stiff area ratio and the mean hue value, respectively. The optimal diagnostic cut-off value of the stiff area ratio was 0.48, with sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 82.86%, 81.82%, 82.86%, 81.82% and 82.35%, respectively. The optimal diagnostic cut-off value of the mean hue value was 126.28, with sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 85.71%, 75.76%, 78.95%, 83.33% and 80.88%, respectively.
Conclusion: Both the stiff area ratio and the mean hue value can be used to analyze EBUS-RTE images quantitatively and to differentiate benign from malignant intrathoracic lymph nodes; of the two, the stiff area ratio performs better.
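The reported operating-point statistics follow directly from a 2×2 confusion table at the chosen cutoff. A sketch (predicting malignant when the quantitative score is at or above the cutoff, as for the stiff area ratio; the data below are illustrative, not the study's):

```python
def diagnostic_metrics(values, labels, cutoff):
    """labels: True = malignant. Predict malignant when value >= cutoff."""
    tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y)
    fp = sum(1 for v, y in zip(values, labels) if v >= cutoff and not y)
    fn = sum(1 for v, y in zip(values, labels) if v < cutoff and y)
    tn = sum(1 for v, y in zip(values, labels) if v < cutoff and not y)
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(values),
    }
```

Sweeping the cutoff and plotting sensitivity against 1-specificity yields the ROC curve whose area the study reports.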
NASA Technical Reports Server (NTRS)
Wolpert, David
2004-01-01
Masked proportional routing is an improved procedure for choosing links between adjacent nodes of a network for the purpose of transporting an entity from a source node ("A") to a destination node ("B"). The entity could be, for example, a physical object to be shipped, in which case the nodes would represent waypoints and the links would represent roads or other paths between waypoints. For another example, the entity could be a message or packet of data to be transmitted from A to B, in which case the nodes could be computer-controlled switching stations and the links could be communication channels between the stations. In yet another example, an entity could represent a workpiece while links and nodes could represent, respectively, manufacturing processes and stages in the progress of the workpiece towards a finished product. More generally, the nodes could represent states of an entity and the links could represent allowed transitions of the entity. The purpose of masked proportional routing and of related prior routing procedures is to schedule transitions of entities from their initial states ("A") to their final states ("B") in such a manner as to minimize a cost or to attain some other measure of optimality or efficiency. Masked proportional routing follows a distributed (in the sense of decentralized) approach to probabilistically or deterministically choosing the links. It was developed to satisfy a need for a routing procedure that 1. Does not always choose the same link(s), even for two instances characterized by identical estimated values of associated cost functions; 2. Enables a graceful transition from one set of links to another set of links as the circumstances of operation of the network change over time; 3. Is preferably amenable to separate optimization of different portions of the network; 4. Is preferably usable in a network in which some of the routing decisions are made by one or more other procedure(s); 5. Preferably does not cause an entity to visit the same node twice; and 6. Preferably can be modified so that separate entities moving from A to B do not arrive out of order.
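Two of the defining ingredients above — a mask that forbids revisiting nodes (requirement 5) and a probabilistic, cost-dependent link choice (requirement 1) — can be sketched as follows. The exponential weighting is our illustrative choice; the actual procedure derives its proportions from estimated cost functions:

```python
import math
import random

def choose_link(links, cost, visited, rng):
    """Mask out already-visited next hops, then choose among the rest with
    probability proportional to exp(-cost) (a softmax-style weighting)."""
    allowed = [n for n in links if n not in visited]
    if not allowed:
        return None                      # routing dead end
    weights = [math.exp(-cost[n]) for n in allowed]
    r = rng.random() * sum(weights)      # roulette-wheel selection
    for n, w in zip(allowed, weights):
        r -= w
        if r <= 0:
            return n
    return allowed[-1]                   # guard against floating-point drift
```

Because the choice is probabilistic rather than argmin, two instances with identical cost estimates need not take the same link, and shifting the cost estimates shifts the link proportions gracefully.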
Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks
Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang
2018-01-01
In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes this system model, considering both industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of clustered heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches. PMID:29439439
Bi-tangential hybrid IMRT for sparing the shoulder in whole breast irradiation.
Farace, P; Deidda, M A; Iamundo de Cumis, I; Deiana, E; Farigu, R; Lay, G; Porru, S
2013-11-01
A bi-tangential technique is proposed to reduce undesired doses to the shoulder produced by standard tangential irradiation. A total of 6 patients affected by shoulder pain and reduced functional capacity after whole-breast irradiation were retrospectively analysed. The standard tangential plan used for treatment was compared with (1) a single bi-tangential plan where, to spare the shoulder, the lateral open tangent was split into two half-beams at the isocentre, with the superior portion rotated by 10-20° medially with respect to the standard lateral beam; and (2) a double bi-tangential plan, where both tangential open beams were split. The planning target volume (PTV) coverage and the dose to the portion of muscles and axilla included in the standard tangential beams were compared. The PTV95% of the standard plan (91.9 ± 3.8) was not significantly different from that of the single bi-tangential plan (91.8 ± 3.4); a small but significant (p < 0.01) decrease was observed with the double bi-tangential plan (90.1 ± 3.7). A marked dose reduction to the muscle was produced by the single bi-tangential plan at around 30-40 Gy. The double bi-tangential technique further reduced the volume receiving around 20 Gy, but did not markedly affect the higher doses. The dose to the axilla was reduced in both the single and the double bi-tangential plans. The single bi-tangential technique would have been able to reduce the dose to the shoulder and axilla without compromising target coverage. This simple technique is valuable for irradiation after axillary lymph node dissection or in patients without dissection due to negative or low-volume sentinel lymph node disease.
Efficacy of antidepressive medication for depression in Parkinson disease: a network meta-analysis
Zhuo, Chuanjun; Xue, Rong; Luo, Lanlan; Ji, Feng; Tian, Hongjun; Qu, Hongru; Lin, Xiaodong; Jiang, Ronghuan; Tao, Ran
2017-01-01
Abstract Background: Parkinson disease (PD) is considered the second most prevalent neurodegenerative disorder after Alzheimer disease, and depression is a prevailing nonmotor symptom of PD. Typically used antidepressant medications include tricyclic antidepressants (TCA), selective serotonin reuptake inhibitors (SSRI), serotonin and norepinephrine reuptake inhibitors (SNRI), monoamine-oxidase inhibitors (MAOI), and dopamine agonists (DA). Our study aimed at evaluating the efficacy of antidepressive medications for depression in PD. Methods: Web of Science, PubMed, Embase, and the Cochrane Library were searched for related articles. Traditional meta-analysis and network meta-analysis (NMA) were performed with outcomes including depression score, UPDRS-II, UPDRS-III, and adverse effects. Surface under the cumulative ranking curve (SUCRA) analysis was also performed to illustrate the rank probabilities of the different medications on the various outcomes. The consistency of direct and indirect evidence was assessed by the node-splitting method. Results: In the traditional pairwise meta-analysis, significant improvement in depression score was observed for AD, MAOI, SSRI, and SNRI compared with placebo. NMA provided further information: concerning UPDRS-III, DA was shown to be effective over placebo, MAOI, and SNRI, and DA demonstrated a better prognosis in UPDRS-II scores compared with placebo and MAOI. However, DA and SSRI demonstrated a significant increase in adverse effects compared with placebo. The SUCRA value was calculated to evaluate the ranking probabilities of all medications on the investigated outcomes, and the consistency between direct and indirect evidence was assessed by the node-splitting method. Conclusion: SSRI had satisfying efficacy for depression in PD patients and could improve activities of daily living and motor function, but its adverse effects are non-negligible. SNRI is the safest medication, with high efficacy for depression as well, although its other outcomes are relatively poor. PMID:28562526
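SUCRA condenses each treatment's rank distribution into a single 0-1 score: the sum of its cumulative rank probabilities over the first a-1 ranks, divided by a-1. A sketch of that standard computation (the rank probabilities would come from the NMA posterior; the values used here are illustrative):

```python
def sucra(rank_probs):
    """rank_probs[k]: probabilities that treatment k is ranked 1st, 2nd, ...
    SUCRA = 1 for a treatment certain to rank first, 0 for certain last."""
    scores = []
    for probs in rank_probs:
        a = len(probs)                   # number of treatments
        cum, total = 0.0, 0.0
        for p in probs[:a - 1]:          # cumulative probs over first a-1 ranks
            cum += p
            total += cum
        scores.append(total / (a - 1))
    return scores
```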
Solar water splitting by photovoltaic-electrolysis with a solar-to-hydrogen efficiency over 30%
Jia, Jieyang; Seitz, Linsey C.; Benck, Jesse D.; Huo, Yijie; Chen, Yusi; Ng, Jia Wei Desmond; Bilir, Taner; Harris, James S.; Jaramillo, Thomas F.
2016-01-01
Hydrogen production via electrochemical water splitting is a promising approach for storing solar energy. For this technology to be economically competitive, it is critical to develop water splitting systems with high solar-to-hydrogen (STH) efficiencies. Here we report a photovoltaic-electrolysis system with the highest STH efficiency for any water splitting technology to date, to the best of our knowledge. Our system consists of two polymer electrolyte membrane electrolysers in series with one InGaP/GaAs/GaInNAsSb triple-junction solar cell, which produces a large-enough voltage to drive both electrolysers with no additional energy input. The solar concentration is adjusted such that the maximum power point of the photovoltaic is well matched to the operating capacity of the electrolysers to optimize the system efficiency. The system achieves a 48-h average STH efficiency of 30%. These results demonstrate the potential of photovoltaic-electrolysis systems for cost-effective solar energy storage. PMID:27796309
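The headline figure is an energy ratio: chemical energy stored in the produced hydrogen over incident solar power. A sketch using the Gibbs free energy of water splitting (≈237 kJ/mol at 25 °C); the paper's exact accounting of the concentrated solar input may differ:

```python
def sth_efficiency(h2_rate_mol_per_s, solar_power_w, gibbs_j_per_mol=237_000):
    """Solar-to-hydrogen efficiency: rate of chemical energy stored in H2
    (Gibbs free energy of water splitting) divided by incident solar power."""
    return h2_rate_mol_per_s * gibbs_j_per_mol / solar_power_w
```

For example, a system that stores 300 W of hydrogen chemical energy from 1 kW of sunlight operates at 30% STH efficiency.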
Fu, Zhuo; Wang, Jiangtao
2018-01-01
In order to promote the development of low-carbon logistics and economize logistics distribution costs, the vehicle routing problem with split deliveries by backpack is studied. Building on the model of the classical capacitated vehicle routing problem, a form of discrete split delivery was designed in which customer demand can be split only by backpack. A double-objective mathematical model and a corresponding adaptive tabu search (TS) algorithm were constructed to solve this problem. By embedding an adaptive penalty mechanism and adopting a random neighborhood selection strategy and a reinitialization principle, the global optimization ability of the new algorithm was enhanced. Comparisons with results in the literature show the effectiveness of the proposed algorithm. The proposed method can save low-carbon logistics costs and reduce carbon emissions, which is conducive to the sustainable development of low-carbon logistics. PMID:29747469
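The discrete split rule — a customer's demand may only be divided into backpack-sized portions, never arbitrary fractions — can be sketched as below (function name and the full-backpacks-first ordering are our illustrative choices):

```python
def split_by_backpack(demand, backpack):
    """Split an integer demand into full backpack loads plus one remainder
    portion; each portion can then be assigned to a different vehicle route."""
    full, rest = divmod(demand, backpack)
    return [backpack] * full + ([rest] if rest else [])
```

These portions become the indivisible units the tabu search assigns to routes, in contrast to continuous split-delivery models where any fraction of a demand may be served.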
Design, selection, and characterization of a split chorismate mutase
Müller, Manuel M; Kries, Hajo; Csuhai, Eva; Kast, Peter; Hilvert, Donald
2010-01-01
Split proteins are versatile tools for detecting protein–protein interactions and studying protein folding. Here, we report a new, particularly small split enzyme, engineered from a thermostable chorismate mutase (CM). Upon dissecting the helical-bundle CM from Methanococcus jannaschii into a short N-terminal helix and a 3-helix segment and attaching an antiparallel leucine zipper dimerization domain to the individual fragments, we obtained a weakly active heterodimeric mutase. Using combinatorial mutagenesis and in vivo selection, we optimized the short linker sequences connecting the leucine zipper to the enzyme domain. One of the selected CMs was characterized in detail. It spontaneously assembles from the separately inactive fragments and exhibits wild-type like CM activity. Owing to the availability of a well characterized selection system, the simple 4-helix bundle topology, and the small size of the N-terminal helix, the heterodimeric CM could be a valuable scaffold for enzyme engineering efforts and as a split sensor for specifically oriented protein–protein interactions. PMID:20306491
MHD Code Optimizations and Jets in Dense Gaseous Halos
NASA Astrophysics Data System (ADS)
Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max
We have further optimized and extended the 3D-MHD-code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel, on top of the shared-memory parallelization. On a 512^3 grid, we reach 561 Gflops with 32 nodes on the SX-8. Also, we have successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a puzzling reluctance of cold material to enter the low-density cocoon, which has to be investigated further.
Hybrid optimal online-overnight charging coordination of plug-in electric vehicles in smart grid
NASA Astrophysics Data System (ADS)
Masoum, Mohammad A. S.; Nabavi, Seyed M. H.
2016-10-01
Optimal coordinated charging of plugged-in electric vehicles (PEVs) in smart grid (SG) can be beneficial for both consumers and utilities. This paper proposes a hybrid optimal online followed by overnight charging coordination of high and low priority PEVs using discrete particle swarm optimization (DPSO) that considers the benefits of both consumers and electric utilities. Objective functions are online minimization of total cost (associated with grid losses and energy generation) and overnight valley filling through minimization of the total load levels. The constraints include substation transformer loading, node voltage regulation and the requested final battery state of charge levels (SOC_req). The main challenge is optimal selection of the overnight starting time (t_overnight,start) to guarantee charging of all vehicle batteries to the SOC_req levels before the requested plug-out times (t_req), which is done by simultaneously solving the online and overnight objective functions. The online-overnight PEV coordination approach is implemented on a 449-node SG; results are compared for uncoordinated and coordinated battery charging as well as a modified strategy using cost minimizations for both online and overnight coordination. The impact of t_overnight,start on performance of the proposed PEV coordination is investigated.
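The overnight valley-filling objective can be approximated by a simple greedy that drops each unit of charging demand into the currently least-loaded time slot. This is only a toy stand-in for the paper's DPSO, with hypothetical loads and vehicles:

```python
def valley_fill(base_load, ev_charging):
    """Greedy overnight valley filling (illustrative, not the paper's DPSO).

    base_load: list of background load per time slot.
    ev_charging: list of (slots_needed, power) per vehicle; each required
    charging slot is placed in the currently least-loaded time slot, which
    flattens the overnight load valley.
    """
    load = list(base_load)
    for slots_needed, power in ev_charging:
        for _ in range(slots_needed):
            i = min(range(len(load)), key=load.__getitem__)  # deepest valley
            load[i] += power
    return load

# One hypothetical PEV needing 2 slots at 2.0 kW fills the overnight valley
print(valley_fill([5, 1, 1, 5], [(2, 2.0)]))  # -> [5, 3.0, 3.0, 5]
```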
Monitoring Churn in Wireless Networks
NASA Astrophysics Data System (ADS)
Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger
Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.
Dynamic Trust Management for Delay Tolerant Networks and Its Application to Secure Routing
2012-09-28
population of misbehaving nodes or evolving hostility or social relations such that an application (e.g., secure routing) built on top of trust...optimization in DTNs in response to dynamically changing conditions such as increasing population of misbehaving nodes. The design part addresses the...The rest of the paper is organized as follows. In Section 2, we survey existing trust management protocols and approaches to deal with misbehaved
NASA Astrophysics Data System (ADS)
Tsai, Yi-Pei; Hsieh, Ting-Huan; Lin, Chrong Jung; King, Ya-Chin
2017-09-01
A novel device for monitoring plasma-induced damage in the back-end-of-line (BEOL) process with charge splitting capability is proposed and demonstrated for the first time. This novel charge splitting in situ recorder (CSIR) can independently trace the amount and polarity of plasma charging effects during the manufacturing process of advanced fin field-effect transistor (FinFET) circuits. Not only does it reveal the real-time and in situ plasma charging levels on the antennas, but it also separates positive and negative charging effects and provides two independent readings. As CMOS technologies push for finer metal lines in the future, the new charge separation scheme provides a powerful tool for BEOL process optimization and further device reliability improvements.
Optical splitter design for telecommunication access networks with triple-play services
NASA Astrophysics Data System (ADS)
Agalliu, Rajdi; Burtscher, Catalina; Lucki, Michal; Seyringer, Dana
2018-01-01
In this paper, we present various designs of optical splitters for access networks, such as GPON and XG-PON by ITU-T, with triple-play services (i.e., data, voice and video). The presented designs represent a step forward, compared to the solutions recommended by the ITU, in terms of performance in transmission systems using WDM. The quality of performance is represented by the bit error rate and the Q-factor. Besides the standard splitter design, we propose a new length-optimized splitter design with a smaller waveguide core, providing some reduction of the non-uniformity of the power split between the output waveguides. The achieved splitting parameters are incorporated in the simulations of passive optical networks. For this purpose, the OptSim tool employing the Time Domain Split Step method was used.
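The two quality metrics quoted here, Q-factor and bit error rate, are linked by the standard Gaussian-noise approximation BER = 0.5 * erfc(Q / sqrt(2)), under which a target BER of about 1e-9 corresponds to Q of roughly 6. A quick sketch of that conversion (the relation is the textbook one, not specific to these splitter designs):

```python
import math

def ber_from_q(q):
    """Gaussian-noise approximation linking Q-factor to bit error rate:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q = 6 is the usual rule-of-thumb threshold for BER around 1e-9
print(f"{ber_from_q(6):.1e}")  # roughly 1e-9
```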
Spatial Search by Quantum Walk is Optimal for Almost all Graphs.
Chakraborty, Shantanav; Novo, Leonardo; Ambainis, Andris; Omar, Yasser
2016-03-11
The problem of finding a marked node in a graph can be solved by the spatial search algorithm based on continuous-time quantum walks (CTQW). However, this algorithm is known to run in optimal time only for a handful of graphs. In this work, we prove that for Erdős-Rényi random graphs, i.e., graphs of n vertices where each edge exists with probability p, search by CTQW is almost surely optimal as long as p≥log^{3/2}(n)/n. Consequently, we show that quantum spatial search is in fact optimal for almost all graphs, meaning that the fraction of graphs of n vertices for which this optimality holds tends to one in the asymptotic limit. We obtain this result by proving that search is optimal on graphs where the ratio between the second largest and the largest eigenvalue is bounded by a constant smaller than 1. Finally, we show that we can extend our results on search to establish high fidelity quantum communication between two arbitrary nodes of a random network of interacting qubits, namely, to perform quantum state transfer, as well as entanglement generation. Our work shows that quantum information tasks typically designed for structured systems retain performance in very disordered structures.
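The spectral condition in the proof, namely that the ratio of the second-largest to the largest adjacency eigenvalue stays bounded below 1, is easy to check numerically on a sampled random graph. The function and parameter choices below are our illustration, not the paper's construction:

```python
import numpy as np

def spectral_ratio(n, p, seed=0):
    """Ratio of second-largest to largest adjacency eigenvalue of an
    Erdos-Renyi graph G(n, p). The optimality condition in the abstract
    holds when this ratio is bounded away from 1; for dense random graphs
    it shrinks on the order of 1/sqrt(n*p)."""
    rng = np.random.default_rng(seed)
    upper = rng.random((n, n)) < p          # candidate edges
    a = np.triu(upper, 1)                   # keep strict upper triangle
    adj = (a | a.T).astype(float)           # symmetrize: undirected graph
    eig = np.sort(np.linalg.eigvalsh(adj))[::-1]
    return eig[1] / eig[0]

# Dense regime (p far above log^{3/2}(n)/n): ratio is well below 1
print(spectral_ratio(200, 0.5) < 0.3)
```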
Araki, Koji; Mizokami, Daisuke; Tomifuji, Masayuki; Yamashita, Taku; Ohnuki, Kazunobu; Umeda, Izumi O; Fujii, Hirofumi; Kosuda, Shigeru; Shiotani, Akihiro
2014-08-01
Sentinel node navigation surgery using real-time, near-infrared imaging with indocyanine green is becoming popular by allowing head and neck surgeons to avoid unnecessary neck dissection. The major drawback of this method is its quick migration through the lymphatics, limiting the diagnostic time window and undesirable detection of downstream nodes. We resolved this problem by mixing indocyanine green (ICG) with phytate colloid to retard its migration and demonstrated its feasibility in a nude mouse study. Experimental prospective animal study. Animal laboratory. Indocyanine green at 3 concentrations was tested to determine the optimal concentration for sentinel lymph node detection in a mouse model. Effect of indocyanine green with phytate colloid mixture solutions was also analyzed. Indocyanine green or mixture solution at different mixing ratios were injected into the tongue of nude mice and near-infrared fluorescence images were captured sequentially for up to 48 hours. The brightness of fluorescence in the sentinel lymph node and lymph nodes further downstream were assessed. Indocyanine green concentration >50 μg/mL did not improve sentinel lymph node detection. The addition of phytate colloid to indocyanine green extended the period when sentinel lymph node was detectable. Second echelon lymph nodes were not imaged in mice injected with the mixture, while these were visualized in mice injected with indocyanine green alone. This novel technique of ICG-phytate colloid mixture allows prolonged diagnostic time window, prevention of downstream subsequent nodes detection, and improved accuracy for the detection of true sentinel lymph nodes. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
Fracture characterization from near-offset VSP inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, S.; MacBeth, C.; Queen, J.
1997-01-01
A global optimization method incorporating a ray-tracing scheme is used to invert observations of shear-wave splitting from two near-offset VSPs recorded at the Conoco Borehole Test Facility, Kay County, Oklahoma. Inversion results suggest that the seismic anisotropy is due to a non-vertical fracture system. This interpretation is constrained by the VSP acquisition geometry for which two sources are employed along near diametrically opposite azimuths about the well heads. A correlation is noted between the time-delay variations between the fast and slow split shear waves and the sandstone formations.
Shrimankar, D D; Sathe, S R
2016-01-01
Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes, and programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas MPI programs can span multiple SMP nodes at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores, and we present a communication model to approximate the overhead of OpenMP loops. These findings hold across a large variety of input data files. We have also developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
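The kernel being parallelized in studies like this is the classic alignment dynamic-programming recurrence, whose table rows (or anti-diagonal wavefronts) are the units of work distributed across cores or nodes. A serial sketch of Needleman-Wunsch global alignment scoring, with textbook scoring parameters rather than ones taken from the study:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming.

    Keeps only two rows of the DP table; an OpenMP/MPI version would
    distribute this table (e.g., by tiles or anti-diagonal wavefronts).
    """
    prev = [j * gap for j in range(len(b) + 1)]  # first row: all gaps
    for i in range(1, len(a) + 1):
        cur = [i * gap] + [0] * len(b)
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return prev[-1]

# Classic textbook pair: best global alignment score is 0
print(nw_score("GATTACA", "GCATGCU"))  # -> 0
```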
Design and performance of an integrated ground and space sensor web for monitoring active volcanoes.
NASA Astrophysics Data System (ADS)
Lahusen, Richard; Song, Wenzhan; Kedar, Sharon; Shirazi, Behrooz; Chien, Steve; Doubleday, Joshua; Davies, Ashley; Webb, Frank; Dzurisin, Dan; Pallister, John
2010-05-01
An interdisciplinary team of computer, earth and space scientists collaborated to develop a sensor web system for rapid deployment at active volcanoes. The primary goals of this Optimized Autonomous Space In situ Sensorweb (OASIS) are to: 1) integrate complementary space and in situ (ground-based) elements into an interactive, autonomous sensor web; 2) advance sensor web power and communication resource management technology; and 3) enable scalability for the seamless addition of sensors and other satellites into the sensor web. This three-year project began with a rigorous multidisciplinary interchange that resulted in definition of system requirements to guide the design of the OASIS network and to achieve the stated project goals. Based on those guidelines, we have developed fully self-contained in situ nodes that integrate GPS, seismic, infrasonic and lightning (ash) detection sensors. The nodes in the wireless sensor network are linked to the ground control center through a mesh network that is highly optimized for remote geophysical monitoring. OASIS also features autonomous bidirectional interaction between ground nodes and instruments on the EO-1 space platform through continuous analysis and messaging capabilities at the command and control center. Data from both the in situ sensors and satellite-borne hyperspectral imaging sensors stream into a common database for real-time visualization and analysis by earth scientists. We have successfully completed a field deployment of 15 nodes within the crater and on the flanks of Mount St. Helens, Washington, demonstrating that sensor web technology facilitates rapid network deployment and real-time continuous data acquisition. We are now optimizing component performance and improving user interaction for additional deployments at erupting volcanoes in 2010.
Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks
NASA Astrophysics Data System (ADS)
Huang, Lei; Pan, Feng
2007-09-01
In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and moving pattern of the nodes, as well as the link conditions between neighboring nodes. Then, for a multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that uses updated time information to optimize the routing for each individual packet. Numerical examples demonstrate that, using routes discovered by our distributed algorithm in a changing network environment, multimedia applications achieve statistically better QoS.
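If per-link success probabilities are modeled as independent, maximizing a route's probability of meeting its QoS target reduces to a shortest-path search on -log(p) edge weights. The sketch below uses that simplification of the paper's stochastic resource model; the link probabilities are hypothetical:

```python
import heapq, math

def best_route(links, src, dst):
    """Route maximizing the product of per-link success probabilities
    (a simplified stand-in for the paper's soft-QoS metric): run Dijkstra
    on additive weights -log(p), then convert the distance back."""
    graph = {}
    for u, v, p in links:  # undirected links with success probability p
        graph.setdefault(u, []).append((v, -math.log(p)))
        graph.setdefault(v, []).append((u, -math.log(p)))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], math.exp(-dist[dst])

links = [("S", "A", 0.9), ("A", "D", 0.9), ("S", "B", 0.99), ("B", "D", 0.5)]
print(best_route(links, "S", "D"))  # best route S-A-D, probability ~0.81
```

The two-hop route through A wins (0.9 * 0.9 = 0.81) even though the S-B link alone is more reliable, which is exactly the end-to-end view the probability metric enforces.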
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
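For contrast with the metaheuristic, the usual deterministic baseline for this problem is greedy list scheduling by earliest finish time. The sketch below is that baseline, not TMSCRO itself; it ignores communication costs, and the task set is a toy example:

```python
def list_schedule(tasks, deps, cost, n_nodes):
    """Greedy earliest-finish-time list scheduling of a DAG onto
    heterogeneous computing nodes (a simple baseline, not TMSCRO).

    tasks: task ids in topological order.
    deps:  dict task -> list of predecessor tasks.
    cost:  dict (task, node) -> execution time (models heterogeneity).
    """
    node_free = [0.0] * n_nodes           # time each node becomes idle
    finish, placement = {}, {}
    for t in tasks:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        best = min(range(n_nodes),
                   key=lambda n: max(ready, node_free[n]) + cost[(t, n)])
        start = max(ready, node_free[best])
        finish[t] = start + cost[(t, best)]
        node_free[best] = finish[t]
        placement[t] = best
    return placement, max(finish.values())  # placement and makespan

tasks = ["a", "b", "c"]
deps = {"c": ["a", "b"]}           # c waits for both a and b
cost = {("a", 0): 1, ("a", 1): 3, ("b", 0): 3, ("b", 1): 1,
        ("c", 0): 1, ("c", 1): 2}
print(list_schedule(tasks, deps, cost, 2))  # -> ({'a': 0, 'b': 1, 'c': 0}, 2.0)
```

Heterogeneity is what makes the mapping non-trivial: each of a and b runs on the node where it is cheap, so both finish at time 1 and c completes at time 2.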
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storm has serious negative impacts on environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storm in the past decades. To better understand and predict the distribution, intensity and structure of dust storm, a series of dust storm models have been developed, such as Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that may impact the feasibility of the parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with the evenly distributed allocation method.
Specifically, 1) to obtain optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks, but its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for the performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of seeking optimized solutions, this method obtains relatively good feasible solutions within acceptable time; however, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
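The K-Means idea, clustering subdomains by geographic centroid so that adjacent subdomains land on the same computing node, can be sketched in a few lines. The seeding strategy and toy coordinates below are our illustration:

```python
def kmeans_allocate(points, k, iters=20):
    """Allocate subdomains to k computing nodes by clustering their
    geographic centroids with plain k-means: nearby subdomains share a
    node, which keeps most halo exchanges on-node.

    points: list of (x, y) subdomain centroids.
    Returns k groups of points (one group per computing node).
    """
    centers = points[:k]  # deterministic seeding for reproducibility
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each subdomain to the nearest center
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                            (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        centers = [  # move each center to its group's mean
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[i] for i, g in enumerate(groups)]
    return groups

# Two well-separated patches of subdomains end up on two different nodes
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
groups = kmeans_allocate(pts, 2)
print(sorted(len(g) for g in groups))  # -> [3, 3]
```

As the abstract notes, this gives geographically coherent groups cheaply but carries no load-balance guarantee; clusters of unequal size translate directly into unequal work per node.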
Optimizing Hybrid Spreading in Metapopulations
Zhang, Changwang; Zhou, Shi; Miller, Joel C.; Cox, Ingemar J.; Chain, Benjamin M.
2015-01-01
Epidemic spreading phenomena are ubiquitous in nature and society. Examples include the spreading of diseases, information, and computer viruses. Epidemics can spread by local spreading, where infected nodes can only infect a limited set of direct target nodes and global spreading, where an infected node can infect every other node. In reality, many epidemics spread using a hybrid mixture of both types of spreading. In this study we develop a theoretical framework for studying hybrid epidemics, and examine the optimum balance between spreading mechanisms in terms of achieving the maximum outbreak size. We show the existence of critically hybrid epidemics where neither spreading mechanism alone can cause a noticeable spread but a combination of the two spreading mechanisms would produce an enormous outbreak. Our results provide new strategies for maximising beneficial epidemics and estimating the worst outcome of damaging hybrid epidemics. PMID:25923411
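The local/global distinction is easy to make concrete in a toy susceptible-infected simulation. The ring topology, rates, and seed below are illustrative choices of ours, not the paper's metapopulation model:

```python
import random

def hybrid_outbreak(adj, beta_local, beta_global, steps=50, seed=1):
    """Discrete-time SI simulation mixing local spreading (along edges)
    with global spreading (to uniformly random nodes), as a toy version
    of the hybrid-epidemic setting. Returns the final outbreak size."""
    rng = random.Random(seed)
    n = len(adj)
    infected = {0}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:                   # local spreading
                if v not in infected and rng.random() < beta_local:
                    new.add(v)
            if rng.random() < beta_global:     # global spreading
                new.add(rng.randrange(n))
        infected |= new
    return len(infected)

# Ring of 200 nodes: purely local spread crawls outward from one seed,
# while a little global spreading keeps planting new local fronts.
ring = [[(i - 1) % 200, (i + 1) % 200] for i in range(200)]
local_only = hybrid_outbreak(ring, 0.5, 0.0)
hybrid = hybrid_outbreak(ring, 0.5, 0.2)
print(local_only, hybrid)  # the hybrid run reaches far more of the ring
```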
ZeroCal: Automatic MAC Protocol Calibration
NASA Astrophysics Data System (ADS)
Meier, Andreas; Woehrle, Matthias; Zimmerling, Marco; Thiele, Lothar
Sensor network MAC protocols are typically configured for an intended deployment scenario once and for all at compile time. This approach, however, leads to suboptimal performance if the network conditions deviate from the expectations. We present ZeroCal, a distributed algorithm that allows nodes to dynamically adapt to variations in traffic volume. Using ZeroCal, each node autonomously configures its MAC protocol at runtime, thereby trying to reduce the maximum energy consumption among all nodes. While the algorithm is readily usable for any asynchronous low-power listening or low-power probing protocol, we validate and demonstrate the effectiveness of ZeroCal on X-MAC. Extensive testbed experiments and simulations indicate that ZeroCal quickly adapts to traffic variations. We further show that ZeroCal extends network lifetime by 50% compared to an optimal configuration with identical and static MAC parameters at all nodes.
Ramírez-Backhaus, Miguel; Mira Moreno, Alejandra; Gómez Ferrer, Alvaro; Calatrava Fons, Ana; Casanova, Juan; Solsona Narbón, Eduardo; Ortiz Rodríguez, Isabel María; Rubio Briones, José
2016-11-01
We evaluated the effectiveness of indocyanine green guided pelvic lymph node dissection for the optimal staging of prostate cancer and analyzed whether the technique could replace extended pelvic lymph node dissection. A solution of 25 mg indocyanine green in 5 ml sterile water was transperineally injected. Pelvic lymph node dissection was started with the indocyanine green stained nodes followed by extended pelvic lymph node dissection. Primary outcome measures were sensitivity, specificity, predictive value and likelihood ratio of a negative test of indocyanine green guided pelvic lymph node dissection. A total of 84 patients with a median age of 63.55 years and a median prostate specific antigen of 8.48 ng/ml were included in the study. Of these patients 60.7% had intermediate risk disease and 25% had high or very high risk disease. A median of 7 indocyanine green stained nodes per patient was detected (range 2 to 18) with a median of 22 nodes excised during extended pelvic lymph node dissection. Lymph node metastasis was identified in 25 patients, 23 of whom had disease properly classified by indocyanine green guided pelvic lymph node dissection. The most frequent location of indocyanine green stained nodes was the proximal internal iliac artery followed by the fossa of Marcille. The negative predictive value was 96.7% and the likelihood ratio of a negative test was 8%. Overall 1,856 nodes were removed and 603 were stained indocyanine green. Pathological examination revealed 82 metastatic nodes, of which 60% were indocyanine green stained. The negative predictive value was 97.4% but the likelihood ratio of a negative test was 58.5%. Indocyanine green guided pelvic lymph node dissection correctly staged 97% of cases. However, according to our data it cannot replace extended pelvic lymph node dissection. 
Nevertheless, its high negative predictive value could allow us to avoid extended pelvic lymph node dissection if we had an accurate intraoperative lymph fluorescent analysis. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Junseok; Rhyou, Chanryeol; Kang, Byungjun; Lee, Hyungsuk
2017-04-01
This paper describes continuously phase-modulated standing surface acoustic waves (CPM-SSAW) and their application to particle separation in multiple pressure nodes. A linear change of phase in CPM-SSAW applies a force on particles whose magnitude depends on their size and contrast factors. During continuous phase modulation, we demonstrate that particles of a target dimension are translated in the direction of the moving pressure nodes, whereas smaller particles show oscillatory movements. The rate of phase modulation is optimized for separation of target particles from the relationship between mean particle velocity and period of oscillation. The developed technique is applied to separate particles of a target dimension from a particle mixture. Furthermore, we demonstrate that human keratinocyte cells can be separated from a cell and bead mixture. The separation technique is incorporated with a microfluidic channel spanning multiple pressure nodes, which is advantageous over separation in a single pressure node in terms of throughput.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Laura E.G.; Punglia, Rinaa S.; Wong, Julia S.
2014-11-15
Radiation therapy to the breast following breast conservation surgery has been the standard of care since randomized trials demonstrated equivalent survival compared to mastectomy and improved local control and survival compared to breast conservation surgery alone. Recent controversies regarding adjuvant radiation therapy have included the potential role of additional radiation to the regional lymph nodes. This review summarizes the evolution of regional nodal management focusing on 2 topics: first, the changing paradigm with regard to surgical evaluation of the axilla; second, the role for regional lymph node irradiation and optimal design of treatment fields. Contemporary data reaffirm prior studies showing that complete axillary dissection may not provide additional benefit relative to sentinel lymph node biopsy in select patient populations. Preliminary data also suggest that directed nodal radiation therapy to the supraclavicular and internal mammary lymph nodes may prove beneficial; publication of several studies is awaited to confirm these results and to help define subgroups with the greatest likelihood of benefit.
A novel complex networks clustering algorithm based on the core influence of nodes.
Tong, Chao; Niu, Jianwei; Dai, Bin; Xie, Zhongyu
2014-01-01
In complex networks, cluster structure, identified by the heterogeneity of nodes, has become a common and important topological property. Network clustering methods are thus significant for the study of complex networks. Many typical clustering algorithms currently have weaknesses such as inaccuracy and slow convergence. In this paper, we propose a clustering algorithm based on the core influence of nodes. The clustering process is a simulation of the process of cluster formation in sociology. The algorithm detects the nodes with core influence through their betweenness centrality and builds the cluster's core structure by discriminant functions. It then obtains the final cluster structure by clustering the remaining nodes in the network with an optimization method. Experiments on different datasets show that the clustering accuracy of this algorithm is superior to that of the classical Fast-Newman algorithm, that it clusters faster, and that it plays a positive role in precisely revealing the real cluster structure of complex networks.
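The first step described, detecting core nodes via betweenness centrality, can be sketched with Brandes' algorithm for unweighted graphs. The toy graph and the "pick the highest-scoring nodes as cores" selection are our illustration, not the paper's full method:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for (unnormalized) betweenness centrality on an
    unweighted graph. adj: dict node -> list of neighbours."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma, dist = dict.fromkeys(adj, 0), {v: -1 for v in adj}
        sigma[s], dist[s] = 1, 0
        q = deque([s])
        while q:  # BFS counting shortest paths from s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:  # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Two triangles bridged by the edge 2-3: every cross-cluster shortest path
# runs through nodes 2 and 3, so they score highest and become the cores.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
bc = betweenness(adj)
print(max(bc, key=bc.get) in (2, 3))  # the bridge nodes dominate
```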
Assessing Risk-Based Policies for Pretrial Release and Split Sentencing in Los Angeles County Jails
Usta, Mericcan; Wein, Lawrence M.
2015-01-01
Court-mandated downsizing of the CA prison system has led to a redistribution of detainees from prisons to CA county jails, and subsequent jail overcrowding. Using data that is representative of the LA County jail system, we build a mathematical model that tracks the flow of individuals during arraignment, pretrial release or detention, case disposition, jail sentence, and possible recidivism during pretrial release, after a failure to appear in court, during non-felony probation and during felony supervision. We assess 64 joint pretrial release and split-sentencing (where low-level felon sentences are split between jail time and mandatory supervision) policies that are based on the type of charge (felony or non-felony) and the risk category as determined by the CA Static Risk Assessment tool, and compare their performance to that of the policy LA County used in early 2014, before split sentencing was in use. In our model, policies that offer split sentences to all low-level felons optimize the key tradeoff between public safety and jail congestion by, e.g., simultaneously reducing the rearrest rate by 7% and the mean jail population by 20% relative to the policy LA County used in 2014. The effectiveness of split sentencing is due to two facts: (i) convicted felony offenders comprised ≈ 45% of LA County’s jail population in 2014, and (ii) compared to pretrial release, split sentencing exposes offenders to much less time under recidivism risk per saved jail day. PMID:26714283