Sample records for network optimisation

  1. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar networks can offer significant performance improvements for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of radar networks. Based on the radar network system model, we first derive the Schleher intercept factor for a radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmission power allocation among the radars in the network, so that enhanced LPI performance can be achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is effective in improving the LPI performance of radar networks.
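The power-allocation step can be sketched with a toy genetic algorithm. Everything below is an assumed illustration: the channel gains, power cap and MI threshold are invented, and total radiated power stands in for the Schleher intercept factor that the paper actually minimises:

```python
import math, random

G = [0.8, 1.2, 0.5, 1.0]      # assumed per-radar channel/target gains
P_MAX, MI_MIN = 5.0, 3.0      # per-radar power cap and MI threshold (assumed)

def mutual_info(p):
    # toy MI model: independent radar channels, 0.5*log2(1 + g_i * p_i) each
    return sum(0.5 * math.log2(1 + g * x) for g, x in zip(G, p))

def fitness(p):
    # total power (intercept-factor stand-in) + penalty for MI shortfall
    return sum(p) + 1e3 * max(0.0, MI_MIN - mutual_info(p))

def ga(pop_size=40, gens=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, P_MAX) for _ in G] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            i = rng.randrange(len(G))                      # one-gene mutation
            child[i] = min(P_MAX, max(0.0, child[i] + rng.gauss(0, 0.3)))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(best, sum(best), mutual_info(best))
```

The penalty weight makes any MI-infeasible candidate lose to a feasible one, so the search ends near the constraint boundary with far less than full power.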

  2. A Method for Decentralised Optimisation in Networks

    NASA Astrophysics Data System (ADS)

    Saramäki, Jari

    2005-06-01

    We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
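A minimal simulation of the gossip idea, under assumed settings (a ring network, a one-dimensional quadratic objective, Gaussian Monte Carlo trials), might look like this:

```python
import random

def f(x):                                # toy objective to minimise
    return (x - 3.0) ** 2

def run(n_agents=20, rounds=300, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-10, 10) for _ in range(n_agents)]   # one solution per agent
    for _ in range(rounds):
        for i in range(n_agents):
            trial = x[i] + rng.gauss(0, 0.5)              # local Monte Carlo trial
            if f(trial) < f(x[i]):
                x[i] = trial
            j = rng.choice([(i - 1) % n_agents, (i + 1) % n_agents])   # gossip query
            if f(x[j]) < f(x[i]):
                x[i] = x[j]              # adopt the neighbour's better solution
    return x

solutions = run()
print(min(f(s) for s in solutions))
```

No agent sees the whole network; good solutions still propagate hop by hop, which is the decentralisation point of the abstract.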

  3. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, at only a quarter of the computational resources used by the lowest-specified GA run. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction.
These results suggest that resources for the network design problem would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
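The incremental (greedy) routine can be illustrated on a toy surrogate objective. The candidate stations and the coverage-style objective below are invented stand-ins for the Bayesian uncertainty-reduction cost function:

```python
CANDIDATES = {   # hypothetical stations -> grid cells they observe
    "A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7},
    "D": {2, 7}, "E": {8, 9}, "F": {1, 9},
}

def coverage(stations):
    # surrogate for uncertainty reduction: cells observed by at least one station
    seen = set()
    for s in stations:
        seen |= CANDIDATES[s]
    return len(seen)

def incremental_design(k):
    chosen = []
    for _ in range(k):
        remaining = sorted(set(CANDIDATES) - set(chosen))   # sort for determinism
        best = max(remaining, key=lambda s: coverage(chosen + [s]))
        chosen.append(best)      # keep the station with the best marginal gain
    return chosen

network = incremental_design(3)
print(network, coverage(network))
```

Each pass evaluates only the remaining candidates once, which is why IO is so much cheaper than a GA; like the paper's IO routine, it is not guaranteed to find the global optimum.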

  4. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
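The sensing-time trade-off can be sketched using the standard energy-detector sensing-throughput formulation rather than the paper's pricing model; all constants below are assumed for illustration:

```python
import math

T = 0.1            # frame length (s), assumed
SNR_P = 0.05       # received primary-user SNR at the sensor, assumed
FS = 6e6           # sampling frequency (Hz), assumed
C0 = 6.0           # throughput on a truly idle channel (bit/s/Hz), assumed
P_IDLE = 0.8       # probability that the primary channel is idle, assumed
PD_TARGET = 0.9    # required detection probability

def q(x):          # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def qinv(p):       # inverse of Q via bisection (Q is monotone decreasing)
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def p_false_alarm(tau):
    # energy-detector false-alarm rate with the threshold set to meet PD_TARGET
    n = tau * FS
    return q(math.sqrt(2 * SNR_P + 1) * qinv(PD_TARGET) + math.sqrt(n) * SNR_P)

def throughput(tau):
    # longer sensing protects primary users but leaves less airtime in the frame
    return (T - tau) / T * P_IDLE * (1 - p_false_alarm(tau)) * C0

best_tau = max((i * T / 1000 for i in range(1, 1000)), key=throughput)
print(best_tau, throughput(best_tau))
```

The grid search stands in for the paper's joint optimisation; the interior optimum arises because false alarms fall sharply with sensing time while airtime shrinks only linearly.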

  5. Optimality principles in the regulation of metabolic networks.

    PubMed

    Berkhout, Jan; Bruggeman, Frank J; Teusink, Bas

    2012-08-29

    One of the challenging tasks in systems biology is to understand how molecular networks give rise to emergent functionality and whether universal design principles apply to molecular networks. To achieve this, the biophysical, evolutionary and physiological constraints that act on those networks need to be identified in addition to the characterisation of the molecular components and interactions. Then, the cellular "task" of the network, i.e. its function, should be identified. A network contributes to organismal fitness through its function. The premise is that the same functions are often implemented in different organisms by the same type of network; hence, the concept of design principles. In biology, due to the strong forces of selective pressure and natural selection, network functions can often be understood as the outcome of fitness optimisation. The hypothesis of fitness optimisation to understand the design of a network has proven to be a powerful strategy. Here, we outline the use of several optimisation principles applied to biological networks, with an emphasis on metabolic regulatory networks. We discuss the different objective functions and constraints that are considered and the kind of understanding that they provide.

  6. Optimality Principles in the Regulation of Metabolic Networks

    PubMed Central

    Berkhout, Jan; Bruggeman, Frank J.; Teusink, Bas

    2012-01-01

    One of the challenging tasks in systems biology is to understand how molecular networks give rise to emergent functionality and whether universal design principles apply to molecular networks. To achieve this, the biophysical, evolutionary and physiological constraints that act on those networks need to be identified in addition to the characterisation of the molecular components and interactions. Then, the cellular “task” of the network—its function—should be identified. A network contributes to organismal fitness through its function. The premise is that the same functions are often implemented in different organisms by the same type of network; hence, the concept of design principles. In biology, due to the strong forces of selective pressure and natural selection, network functions can often be understood as the outcome of fitness optimisation. The hypothesis of fitness optimisation to understand the design of a network has proven to be a powerful strategy. Here, we outline the use of several optimisation principles applied to biological networks, with an emphasis on metabolic regulatory networks. We discuss the different objective functions and constraints that are considered and the kind of understanding that they provide. PMID:24957646

  7. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

    This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. Communication and control updates therefore occur only at discrete instants, when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge exponentially fast to the solution of the problem, and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
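The event-triggering idea can be illustrated on a plain average-consensus problem (an assumed stand-in for the paper's zero-gradient-sum scheme): an agent re-broadcasts its state only when it drifts from its last broadcast value by more than a threshold:

```python
def consensus(x0, steps=200, gain=0.2, threshold=0.05):
    n = len(x0)
    x = list(x0)
    last_sent = list(x0)      # neighbours only ever see the last broadcast value
    messages = 0
    for _ in range(steps):
        for i in range(n):    # local update driven by broadcast neighbour states (ring)
            left, right = last_sent[(i - 1) % n], last_sent[(i + 1) % n]
            x[i] += gain * ((left - x[i]) + (right - x[i]))
        for i in range(n):    # event condition: broadcast only on sufficient drift
            if abs(x[i] - last_sent[i]) > threshold:
                last_sent[i] = x[i]
                messages += 1
    return x, messages

x, messages = consensus([0.0, 2.0, 4.0, 6.0])
print(x, messages)
```

The message counter shows the saving over a time-driven scheme, which would broadcast every agent's state at every step.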

  8. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    NASA Astrophysics Data System (ADS)

    Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.

    2014-05-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone, easy-to-use Java-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.

  9. Evolving optimised decision rules for intrusion detection using particle swarm paradigm

    NASA Astrophysics Data System (ADS)

    Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.

    2012-12-01

    The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective is to demonstrate that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. A rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and the Representative Tree model, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and the optimised decision trees operate over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
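A sketch of swarm-based training-instance selection, with loudly assumed simplifications: a 1-nearest-neighbour classifier stands in for the decision-tree family, the data are synthetic two-class Gaussians rather than KDD traffic, and a textbook binary PSO does the selection:

```python
import math, random

random.seed(2)
# synthetic two-class data: class labels 0 and 4 double as cluster centres
TRAIN = [((random.gauss(c, 1.0), random.gauss(c, 1.0)), c) for c in (0, 4) for _ in range(20)]
VALID = [((random.gauss(c, 1.0), random.gauss(c, 1.0)), c) for c in (0, 4) for _ in range(20)]

def predict(kept, point):
    # 1-NN over the kept training instances (decision-tree stand-in)
    nearest = min((TRAIN[i] for i in kept),
                  key=lambda t: (t[0][0] - point[0]) ** 2 + (t[0][1] - point[1]) ** 2)
    return nearest[1]

def fitness(mask):
    kept = [i for i, b in enumerate(mask) if b]
    if not kept:
        return 0.0
    acc = sum(predict(kept, p) == y for p, y in VALID) / len(VALID)
    return acc - 0.001 * len(kept)       # prefer smaller training subsets

def binary_pso(n=len(TRAIN), particles=10, iters=30):
    swarm = [[random.random() < 0.5 for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [list(s) for s in swarm]
    gbest = max(swarm, key=fitness)
    for _ in range(iters):
        for k in range(particles):
            for d in range(n):
                vel[k][d] = (0.7 * vel[k][d]
                             + 1.5 * random.random() * (pbest[k][d] - swarm[k][d])
                             + 1.5 * random.random() * (gbest[d] - swarm[k][d]))
                # sigmoid-of-velocity resampling, the standard binary-PSO rule
                swarm[k][d] = random.random() < 1 / (1 + math.exp(-vel[k][d]))
            if fitness(swarm[k]) > fitness(pbest[k]):
                pbest[k] = list(swarm[k])
        gbest = max(pbest + [gbest], key=fitness)
    return gbest

mask = binary_pso()
print(sum(mask))
```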

  10. Path optimisation of a mobile robot using an artificial neural network controller

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Parhi, D. R.

    2011-01-01

    This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to the robot's position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem for mobile robots, which involves cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results and show very good agreement. The training of the neural nets and the control performance analysis were carried out in a real experimental setup.
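A minimal sketch of the controller idea, under assumed simplifications: a small two-layer tanh network maps normalised (left, right, front obstacle distance, target angle) readings to a steering angle, trained by back-propagation against a hand-made steering rule that stands in for the paper's four-layer network and real sensor data:

```python
import math, random

random.seed(6)
H, LR, EPOCHS = 8, 0.01, 500

def target_rule(left, right, front, goal):
    # hypothetical teacher: steer toward the goal, away from the nearer side
    return 0.5 * goal + 0.3 * (left - right) / (left + right + 1e-6)

# synthetic normalised sensor readings: (left, right, front, target angle)
DATA = []
for _ in range(200):
    s = [random.uniform(0.1, 1.0), random.uniform(0.1, 1.0),
         random.uniform(0.1, 1.0), random.uniform(-1.0, 1.0)]
    DATA.append((s, target_rule(*s)))

W1 = [[random.gauss(0, 0.5) for _ in range(4)] for _ in range(H)]
B1 = [0.0] * H
W2 = [random.gauss(0, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * v for w, v in zip(row, x)) + b) for row, b in zip(W1, B1)]
    return h, sum(w * v for w, v in zip(W2, h)) + b2

def train():
    global b2
    for _ in range(EPOCHS):
        for x, y in DATA:
            h, out = forward(x)
            err = out - y                          # d(loss)/d(out) for 0.5*err^2
            for j in range(H):
                grad_h = err * W2[j] * (1 - h[j] ** 2)   # back-propagated gradient
                W2[j] -= LR * err * h[j]
                B1[j] -= LR * grad_h
                for i in range(4):
                    W1[j][i] -= LR * grad_h * x[i]
            b2 -= LR * err

train()
mse = sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)
print(mse)
```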

  11. Modulation aware cluster size optimisation in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a significant role because of their numerous advantages. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation through cluster size optimisation, taking the effect of modulation into account when the nodes cannot communicate using a baseband communication technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it improves energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with modulation effects taken into consideration. Energy minimisation can be achieved by changing the modulation scheme (e.g. BPSK, QPSK, 16-QAM, 64-QAM), so the effect of different modulation techniques on cluster formation is considered. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the field allows only a small number of modulation schemes to operate in an energy-efficient manner, whereas placing it at a corner of the sensing field allows a larger number of modulation schemes to do so.
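The cluster-count optimisation can be sketched with an assumed LEACH-style radio energy model (not the paper's exact expressions), where a per-bit modulation energy term is added to the electronics cost:

```python
import math

N, M, BITS = 100, 100.0, 4000          # nodes, field side (m), bits per packet
E_ELEC = 50e-9                         # electronics energy per bit (J), assumed
EPS_FS, EPS_MP = 10e-12, 0.0013e-12    # amplifier constants (free-space / multipath)
D_BS = 75.0                            # cluster-head-to-base-station distance (m)

def round_energy(k, e_mod):
    # expected member-to-cluster-head distance^2 in a square field with k clusters
    d_ch2 = M * M / (2 * math.pi * k)
    member = BITS * (E_ELEC + e_mod + EPS_FS * d_ch2)     # free-space hop
    head = BITS * (E_ELEC + e_mod + EPS_MP * D_BS ** 4)   # multipath hop to BS
    return (N - k) * member + k * head

def best_k(e_mod, k_max=20):
    return min(range(1, k_max + 1), key=lambda k: round_energy(k, e_mod))

print(best_k(0.0), best_k(5e-9))       # e.g. baseband vs a costlier modulation
```

In this simplified model the modulation term raises the total energy per round but leaves the optimal cluster count unchanged; capturing the interplay the paper describes (which schemes stay energy-efficient for a given base-station placement) needs the fuller per-scheme energy and distance model.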

  12. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method, and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation, leading to a dramatic decrease in groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network).
The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of the Mires basin was optimised six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to reach the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions according to the change in the mean fitness value. This introduces randomness into reproduction when the solution converges, in order to avoid local minima, and more directed reproduction (a higher crossover ratio) when the mean fitness value is changing more strongly. The choice of the integer genetic algorithm in MATLAB 2015a restricts the addition of custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions were created to set the initial population type to custom and to allow the mutation and crossover probabilities to change according to the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to the Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map. The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map.
Reference: Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
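The well-removal search can be sketched with two loud simplifications: inverse-distance weighting stands in for Ordinary Kriging with the Spartan variogram, and a greedy one-at-a-time removal stands in for the integer genetic algorithm:

```python
import math, random

random.seed(3)
WELLS = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(40, 60))
         for _ in range(15)]     # hypothetical wells: (x, y, groundwater level)
GRID = [(i + 0.5, j + 0.5) for i in range(10) for j in range(10)]

def idw_map(wells):
    # inverse-distance-weighted level at every grid cell (kriging stand-in)
    grid_vals = []
    for gx, gy in GRID:
        w = [1.0 / ((gx - x) ** 2 + (gy - y) ** 2 + 1e-9) for x, y, _ in wells]
        grid_vals.append(sum(wi * lvl for wi, (_, _, lvl) in zip(w, wells)) / sum(w))
    return grid_vals

FULL = idw_map(WELLS)            # map produced by the complete network

def map_error(wells):            # 2-norm difference from the full-network map
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(idw_map(wells), FULL)))

def greedy_reduce(n_remove):
    kept = list(WELLS)
    for _ in range(n_remove):    # drop the well whose removal distorts the map least
        kept.remove(min(kept, key=lambda w: map_error([k for k in kept if k is not w])))
    return kept

reduced = greedy_reduce(5)
print(len(reduced), map_error(reduced))
```

The error definition mirrors the abstract's 2-norm criterion; wells in dense clusters tend to be removed first because their neighbours carry nearly the same information.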

  13. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterise these traits and so to understand such networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many of these traits, including scale-free, small-world, ultra-small-world, delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks can be generated. Through this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation. PMID:25160506
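A sketch in the spirit of such optimisation-driven models (the specific rule below is the classic "heuristically optimised trade-off" growth rule, assumed here for illustration, not this paper's model): each new node attaches to the existing node minimising a weighted sum of geometric distance and hop count to the root, and varying the weight moves the network between star-like and tree-like regimes:

```python
import math, random

def grow(n, alpha, seed=5):
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    parent, hops = [None], [0]          # node 0 is the root
    for i in range(1, n):
        # attach where alpha * distance + hop-count-to-root is smallest
        j = min(range(i),
                key=lambda j: alpha * math.hypot(pos[i][0] - pos[j][0],
                                                 pos[i][1] - pos[j][1]) + hops[j])
        parent.append(j)
        hops.append(hops[j] + 1)
    return parent, hops

parent, hops = grow(200, alpha=4.0)
degree = [0] * 200
for p in parent[1:]:
    degree[p] += 1
print(max(degree), max(hops))
```

With alpha = 0 distance is ignored and every node attaches to the root (a pure star); large alpha favours nearby parents and yields deeper, more geometric trees, illustrating how one optimisation trade-off spans several network classes.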

  14. Energy landscapes for a machine learning application to series data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Andrew J.; Stevenson, Jacob D.; Das, Ritankar

    2016-03-28

    Methods developed to explore and characterise potential energy landscapes are applied to the corresponding landscapes obtained from optimisation of a cost function in machine learning. We consider neural network predictions for the outcome of local geometry optimisation in a triatomic cluster, where four distinct local minima exist. The accuracy of the predictions is compared for fits using data from single and multiple points in the series of atomic configurations resulting from local geometry optimisation and for alternative neural networks. The machine learning solution landscapes are visualised using disconnectivity graphs, and signatures in the effective heat capacity are analysed in terms of distributions of local minima and their properties.

  15. Optimal coordinated voltage control in active distribution networks using backtracking search algorithm

    PubMed Central

    Tengku Hashim, Tengku Juhana; Mohamed, Azah

    2017-01-01

    The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to distribution systems. The integration of DGs in a distribution system results in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and the percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919
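A minimal sketch of the backtracking search algorithm itself, run on a toy sphere function; the paper's actual objective (losses plus voltage deviation under MATPOWER power-flow constraints) is not reproduced here:

```python
import random

def sphere(x):      # toy objective standing in for the voltage-control cost
    return sum(v * v for v in x)

def bsa(dim=5, pop=20, iters=500, lo=-5.0, hi=5.0, seed=4):
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    old = [row[:] for row in P]                   # historical population
    for _ in range(iters):
        if rng.random() < rng.random():           # occasionally refresh the history
            old = [row[:] for row in P]
        rng.shuffle(old)
        F = 3.0 * rng.gauss(0, 1)                 # mutation scale factor
        for i in range(pop):
            trial = P[i][:]
            j = rng.randrange(dim)                # guarantee one mutated dimension
            for d in range(dim):
                if d == j or rng.random() < 0.5:  # crossover map
                    v = P[i][d] + F * (old[i][d] - P[i][d])
                    trial[d] = min(hi, max(lo, v))
            if sphere(trial) < sphere(P[i]):      # greedy selection
                P[i] = trial
    return min(P, key=sphere)

best = bsa()
print(best, sphere(best))
```

The defining BSA ingredient is the historical population: mutation pulls each individual toward a shuffled snapshot of an earlier generation, which is what distinguishes it from PSO's velocity-based update.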

  16. Optimal coordinated voltage control in active distribution networks using backtracking search algorithm.

    PubMed

    Tengku Hashim, Tengku Juhana; Mohamed, Azah

    2017-01-01

    The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to distribution systems. The integration of DGs in a distribution system results in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and the percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate.

  17. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM-based reflective modulators can reduce the burst-to-burst power variations due to differential access loss in the upstream path of carrier-distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss-optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-band and over a transmission distance of 80 km.

  18. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with that of similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO combined with the discrete firefly algorithm, and a hybrid of error back-propagation and the genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  19. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    PubMed

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and by performing parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.

  20. Modelling and strategy optimisation for a kind of networked evolutionary games with memories under the bankruptcy mechanism

    NASA Astrophysics Data System (ADS)

    Fu, Shihua; Li, Haitao; Zhao, Guodong

    2018-05-01

    This paper investigates the evolutionary dynamics and strategy optimisation of a class of networked evolutionary games whose strategy updating rules incorporate a 'bankruptcy' mechanism, where a player goes bankrupt after gaining continuously low profits from the game in previous rounds. First, using the semi-tensor product of matrices method, the evolutionary dynamics of this class of games are expressed as a higher-order logical dynamic system and then converted into an algebraic form, on the basis of which the evolutionary dynamics of the given games can be analysed. Second, the strategy optimisation problem is investigated, and free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example is given to show that the new results are effective.
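The semi-tensor product that underpins the algebraic formulation can be illustrated directly. For A (m x n) and B (p x q) with t = lcm(n, p), the STP is (A kron I_{t/n})(B kron I_{t/p}); when n = p it reduces to the ordinary matrix product:

```python
import math

def kron(A, B):                         # Kronecker product, row-major lists
    return [[a * b for a in row_a for b in row_b] for row_a in A for row_b in B]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def stp(A, B):                          # semi-tensor product of A and B
    n, p = len(A[0]), len(B)
    t = n * p // math.gcd(n, p)         # t = lcm(n, p)
    return matmul(kron(A, identity(t // n)), kron(B, identity(t // p)))

# dimensions match -> ordinary matrix product
print(stp([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# column vectors -> Kronecker product, as used to stack players' strategy vectors
print(stp([[1], [2]], [[3], [4]]))
```

The column-vector case is what lets strategy profiles of several players be stacked into a single vector, turning the game dynamics into one linear-algebraic update.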

  1. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.

    PubMed

    Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M

    2018-03-01

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises convolutional layers and Spatial Transformer Networks. These trials are designed to measure the impact of diverse factors, with the end goal of designing a Convolutional Neural Network that can improve the state of the art in traffic sign classification. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms, such as SGD, SGD-Nesterov, RMSprop and Adam, are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network reports an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while being more efficient in terms of memory requirements.
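The optimisers compared in the paper can be illustrated on a toy one-dimensional quadratic instead of a convolutional network; the update rules below are the standard textbook forms:

```python
import math

def grad(w):                                   # gradient of f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

def sgd(w, lr=0.1, steps=100):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def nesterov(w, lr=0.1, mu=0.9, steps=100):
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w + mu * v)     # gradient at the look-ahead point
        w += v
    return w

def rmsprop(w, lr=0.1, rho=0.9, eps=1e-8, steps=100):
    s = 0.0
    for _ in range(steps):
        g = grad(w)
        s = rho * s + (1 - rho) * g * g        # running mean of squared gradients
        w -= lr * g / (math.sqrt(s) + eps)
    return w

def adam(w, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=100):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g              # first-moment estimate
        v = b2 * v + (1 - b2) * g * g          # second-moment estimate
        m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)   # bias correction
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

for opt in (sgd, nesterov, rmsprop, adam):
    print(opt.__name__, opt(0.0))
```

On this smooth deterministic problem all four reach the minimiser at w = 3; the adaptive methods mainly differ in how step sizes are scaled, which is what the paper's comparison probes on real stochastic gradients.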

  2. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. This paper presents the application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for the detection of narrowing in coronary artery vessels. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record in the training database described the state of a patient and provided the input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating-point values) formed the main component of the input data. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of particular coronary vessels) were used as the correct training output pattern. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for verification, so all 580 data records could be used for training as well as for verification of the neural network.
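
    The leave-one-out procedure mentioned above can be sketched generically: each record is held out once and the model is trained on the remainder. The classifier below is a stand-in 1-nearest-neighbour rule, not the paper's MLBP network:

```python
def nearest_neighbour(train, query):
    """Stand-in classifier: label of the closest training point."""
    return min(train, key=lambda xy: sum((a - b) ** 2
                                         for a, b in zip(xy[0], query)))[1]

def leave_one_out_accuracy(data):
    """Hold each (features, label) record out once, train on the rest,
    and count correct predictions on the held-out record."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += int(nearest_neighbour(train, x) == y)
    return hits / len(data)
```

    With n records this yields n train/test splits, which is why all 580 records can serve for both training and verification.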

  3. Dynamic least-cost optimisation of wastewater system remedial works requirements.

    PubMed

    Vojinovic, Z; Solomatine, D; Price, R K

    2006-01-01

    In recent years, there has been increasing concern over wastewater system failure and over identifying an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill the gap, several research projects have explored algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, the incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper addresses these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimum is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. Work on assembling the required elements and developing interface protocols between the two tools, which decode potential remedial solutions into the pipe network model and calculate the corresponding scenario costs, is currently underway.
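
    The proposed coupling of a global random-search optimiser with a simulator can be sketched as follows; `simulate_cost` is a toy quadratic surrogate standing in for the hydrodynamic pipe network model, and the candidate encoding is an illustrative assumption:

```python
import random

def simulate_cost(plan):
    """Stand-in for the hydrodynamic pipe-network model: returns a scenario
    cost for a candidate remedial plan (here, a toy quadratic surrogate)."""
    return sum((x - 0.5) ** 2 for x in plan)

def random_search(n_pipes, evaluations=2000, seed=42):
    """Global random search: sample candidate plans, simulate each,
    keep the cheapest scenario seen."""
    rng = random.Random(seed)
    best_plan, best_cost = None, float("inf")
    for _ in range(evaluations):
        plan = [rng.random() for _ in range(n_pipes)]   # candidate remedial plan
        cost = simulate_cost(plan)
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost
```

    In the paper's architecture the simulator call is the expensive step, so the interface between optimiser and model is the critical piece of engineering.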

  4. Optimised cross-layer synchronisation schemes for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Nasri, Nejah; Ben Fradj, Awatef; Kachouri, Abdennaceur

    2017-07-01

    This paper addresses synchronisation between sensor nodes. In the context of wireless sensor networks, the energy cost induced by synchronisation must be taken into account, as it can represent the majority of the energy consumed in communication. A known difficulty is designing a fine-grained synchronisation protocol that remains sufficiently robust to the intermittent energy available in the sensors. This paper therefore focuses on performance and energy saving, in particular on optimising the synchronisation protocol using cross-layer design methods such as synchronisation between layers. Our approach balances energy consumption between the sensors and chooses the cluster head with the highest residual energy in order to guarantee the reliability, integrity and continuity of communication (i.e. maximising the network lifetime).
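
    The residual-energy cluster-head rule described above can be sketched in a few lines; the node fields and per-round energy costs here are illustrative assumptions, not values from the paper:

```python
def elect_cluster_head(nodes):
    """Choose the node with the highest residual energy as cluster head."""
    return max(nodes, key=lambda n: n["residual_energy"])["id"]

def rounds(nodes, n_rounds, head_cost=2.0, member_cost=0.5):
    """Re-elect every round; heading drains more energy than membership,
    so the role rotates and consumption is balanced across the network."""
    heads = []
    for _ in range(n_rounds):
        head = max(nodes, key=lambda n: n["residual_energy"])
        heads.append(head["id"])
        for n in nodes:
            n["residual_energy"] -= head_cost if n is head else member_cost
    return heads
```

    Because the head role costs more energy, the maximum-residual-energy rule naturally alternates the role among nodes, which is the load-balancing effect the abstract describes.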

  5. First on-sky results of a neural network based tomographic reconstructor: Carmen on Canary

    NASA Astrophysics Data System (ADS)

    Osborn, J.; Guzman, D.; de Cos Juez, F. J.; Basden, A. G.; Morris, T. J.; Gendron, É.; Butterley, T.; Myers, R. M.; Guesalaga, A.; Sanchez Lasheras, F.; Gomez Victoria, M.; Sánchez Rodríguez, M. L.; Gratadour, D.; Rousset, G.

    2014-07-01

    We present on-sky results obtained with Carmen, an artificial neural network tomographic reconstructor. It was tested during two nights in July 2013 on Canary, an AO demonstrator on the William Herschel Telescope. Carmen is trained during the day on the Canary calibration bench. This training regime ensures that Carmen is entirely flexible in terms of atmospheric turbulence profile, negating any need to re-optimise the reconstructor in changing atmospheric conditions. Carmen was run in short bursts, interlaced with an optimised Learn and Apply reconstructor. We found the performance of Carmen to be approximately 5% lower than that of Learn and Apply.

  6. Prediction of road traffic death rate using neural networks optimised by genetic algorithm.

    PubMed

    Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari

    2015-01-01

    Road traffic injuries (RTIs) are recognised as a major public health problem at the global, regional and national levels. Prediction of the road traffic death rate will therefore be helpful in its management. On this basis, we used an artificial neural network model optimised by a genetic algorithm to predict mortality. A five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which has not previously been applied to mortality prediction to this extent, showed high performance: the lowest RMSE obtained was 0.0808. These satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to feed into the neural networks. Seven factors were identified with high accuracy as the most influential on the road traffic mortality rate. The results indicate that our model is promising and may play a useful role in developing better methods for assessing the influence of road traffic mortality risk factors.
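
    A hedged sketch of genetic-algorithm feature selection in the spirit described above. The fitness here is a toy surrogate (distance to a hypothetical "true" feature set) rather than the paper's neural-network RMSE, and all parameters are illustrative:

```python
import random

RELEVANT = {0, 3, 5}          # hypothetical "true" predictive features

def fitness(mask):
    """0 is best: the chosen subset exactly matches RELEVANT."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return -len(chosen ^ RELEVANT)

def ga_select(n_features=8, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # bit-flip mutation
                j = rng.randrange(n_features)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

    In the paper's setting, evaluating one bitmask means training and scoring a neural network on that feature subset; the GA machinery around it is the same.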

  7. Improved head direction command classification using an optimised Bayesian neural network.

    PubMed

    Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James

    2006-01-01

    Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of those individuals have limited or non-existent control from the neck downward, alternative hands-free input modalities have become very important for these people to access assistive devices. In hands-free control, head movement has proved to be a very effective user interface, as it can provide a comfortable, reliable and natural way to access the device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may generalise poorly, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with the optimised architecture, Bayesian neural network classifiers can detect head commands of wheelchair users accurately, irrespective of their level of injury.

  8. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.

  9. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  10. Strategic optimisation of microgrid by evolving a unitised regenerative fuel cell system operational criterion

    NASA Astrophysics Data System (ADS)

    Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh

    2016-09-01

    In this paper, the problem of microgrid optimisation with storage is addressed in a broader way rather than being confined to loss minimisation. Unitised regenerative fuel cell (URFC) systems are studied and employed in microgrids to store energy and feed it back into the system when required. A value function dependent on line losses, URFC system operational cost and the energy stored at the end of the day is defined. The function is highly complex, nonlinear and multi-dimensional in nature; therefore, heuristic optimisation techniques combined with load flow analysis are used to resolve the network and time-domain complexity of the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are shown and are found to improve consistently as the solution strategy evolves.
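
    A minimal particle swarm optimisation loop of the kind invoked above; the cost function is a toy quadratic stand-in, not the paper's value function or load-flow model, and the swarm parameters are conventional defaults rather than the paper's settings:

```python
import random

def cost(x):
    """Toy stand-in for line losses + URFC operating cost at dispatch x."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pso(dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal bests
    gbest = min(pbest, key=cost)[:]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

    In the paper, each cost evaluation would invoke the forward/backward sweep load-flow solver, which is what couples the swarm to the network physics.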

  11. A Closed-Loop Optimal Neural-Network Controller to Optimize Rotorcraft Aeromechanical Behaviour. Volume 1; Theory and Methodology

    NASA Technical Reports Server (NTRS)

    Leyland, Jane Anne

    2001-01-01

    Given the predicted growth in air transportation, the potential exists for significant market niches for rotary wing subsonic vehicles. Technological advances which optimise rotorcraft aeromechanical behaviour can contribute significantly to their commercial and military development, acceptance, and sales. Examples of the optimisation of rotorcraft aeromechanical behaviour which are of interest include the minimisation of vibration and/or loads. The reduction of rotorcraft vibration and loads is an important means to extend the useful life of the vehicle and to improve its ride quality. Although vibration reduction can be accomplished by using passive dampers and/or tuned masses, active closed-loop control has the potential to reduce vibration and loads throughout a wider flight regime whilst adding less weight to the aircraft than that required by passive methods. It is emphasised that the analysis described herein is applicable to all those rotorcraft aeromechanical behaviour optimisation problems for which the relationship between the harmonic control vector and the measurement vector can be adequately described by a neural-network model.

  12. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  13. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.

  14. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    NASA Astrophysics Data System (ADS)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
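
    Spatial Simulated Annealing of the kind used above can be sketched with a simplified design criterion; here, mean squared distance from prediction points to the nearest gauge stands in for the space-time averaged KED variance, and the grid, cooling schedule and move size are illustrative assumptions:

```python
import math
import random

GRID = [(x, y) for x in range(10) for y in range(10)]   # prediction locations

def criterion(gauges):
    """Stand-in for the space-time averaged KED variance: mean squared
    distance from each prediction point to its nearest gauge."""
    return sum(min((px - gx) ** 2 + (py - gy) ** 2 for gx, gy in gauges)
               for px, py in GRID) / len(GRID)

def spatial_simulated_annealing(n_gauges=4, iters=3000, t0=5.0, seed=7):
    rng = random.Random(seed)
    gauges = [(rng.uniform(0, 9), rng.uniform(0, 9)) for _ in range(n_gauges)]
    val = criterion(gauges)
    best, best_val = gauges[:], val
    for k in range(iters):
        temp = t0 * (1 - k / iters)                     # linear cooling
        cand = gauges[:]
        i = rng.randrange(n_gauges)                     # perturb one gauge
        cand[i] = (min(9, max(0, cand[i][0] + rng.gauss(0, 1))),
                   min(9, max(0, cand[i][1] + rng.gauss(0, 1))))
        cand_val = criterion(cand)
        accept = cand_val < val or rng.random() < math.exp(
            -(cand_val - val) / max(temp, 1e-9))        # Metropolis rule
        if accept:
            gauges, val = cand, cand_val
            if val < best_val:
                best, best_val = gauges[:], val
    return best, best_val
```

    In the study, each criterion evaluation requires recomputing the kriging variance over space and time, so the move-acceptance loop is the cheap part and the criterion dominates the cost.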

  15. AllAboard: Visual Exploration of Cellphone Mobility Data to Optimise Public Transport.

    PubMed

    Di Lorenzo, G; Sbodio, M; Calabrese, F; Berlingerio, M; Pinelli, F; Nair, R

    2016-02-01

    The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens' mobility and use data-driven insights to better plan and manage services. With large-scale data on mobility patterns, operators can move away from costly, mostly survey-based transportation planning processes to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimisation represents a new frontier with significant societal impact, especially in developing countries. In this paper we present AllAboard, an intelligent tool that analyses cellphone data to help city authorities visually explore urban mobility and optimise public transport. This is performed within a self-contained tool, as opposed to current solutions that rely on a combination of several distinct tools for analysis, reporting, optimisation and planning. An interactive user interface allows transit operators to visually explore travel demand in both space and time, correlate it with the transit network, and evaluate at very fine grain the quality of service that the network provides to citizens. Operators can visually test scenarios for transit network improvements and compare the expected impact on travellers' experience. The system has been tested using real telecommunication data for the city of Abidjan, Ivory Coast, and evaluated from data mining, optimisation and user perspectives.

  16. An integrated and dynamic optimisation model for the multi-level emergency logistics network in anti-bioterrorism system

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Zhao, Lindu

    2012-08-01

    Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to the epidemic areas in an early rescue cycle affect demand later. In this article, an integrated and dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. The application of the optimisation model, together with a short sensitivity analysis of the key parameters in the time-varying demand forecast model, is then presented. The results show that both the model and the solution algorithm are useful in practice, and that both the inventory level and emergency rescue cost objectives can be controlled effectively. Thus, the model can provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers a useful reference for issues pertaining to bioterrorism.

  17. A generic methodology for the optimisation of sewer systems using stochastic programming and self-optimizing control.

    PubMed

    Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan

    2015-05-15

    The design of sewer system control is a complex task given the large size of sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller aims to keep the system close to optimal performance through an optimal selection of controlled variables. The definition of optimal performance was carried out by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and are expected to contribute to the advance of the operation and control of sewer systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Biolithography: Slime mould patterning of polyaniline

    NASA Astrophysics Data System (ADS)

    Berzina, Tatiana; Dimonte, Alice; Adamatzky, Andrew; Erokhin, Victor; Iannotta, Salvatore

    2018-03-01

    Slime mould Physarum polycephalum develops intricate patterns of protoplasmic networks when foraging on non-nutrient substrates. The networks are optimised for spanning large spaces with minimum body mass and for quick transfer of nutrients and metabolites inside the slime mould's body. We hybridise the slime mould's networks with the conductive polymer polyaniline and thus produce micro-patterns of conductive networks. This unconventional lithographic method opens new perspectives in the development of living technology devices and biocompatible non-silicon hardware for applications in integrated circuits, bioelectronics and biosensing.

  19. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    NASA Astrophysics Data System (ADS)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack lacks global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, task interrelations, and the processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of global optimisation through information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  20. Neural network feedforward control of a closed-circuit wind tunnel

    NASA Astrophysics Data System (ADS)

    Sutcliffe, Peter

    Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. 
Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.
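
    A chaos optimisation step of the kind mentioned in the abstract can be sketched with the logistic map as the chaotic generator; this is an illustrative single-variable search under assumed parameters, not the NNMPC implementation:

```python
def chaos_optimise(f, lo, hi, n=500, x0=0.3):
    """Logistic-map chaos search: iterate x' = 4x(1-x), map each chaotic
    value into [lo, hi] as a candidate, and keep the best point seen."""
    x = x0
    best_p, best_v = None, float("inf")
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)            # fully chaotic logistic map
        p = lo + (hi - lo) * x             # embed chaos into the search interval
        v = f(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v
```

    Because the chaotic trajectory is ergodic over (0, 1), the candidates cover the interval without a gradient, which is why such a search can escape the local minima that trap gradient descent.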

  1. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. the creation of Digital Terrain Models (DTM). DTMs have numerous applications in science, engineering, design and project administration. A key step in the DTM technique is the interpolation of elevations to create a continuous surface. There are several interpolation methods, whose results vary with environmental conditions and input data. In this paper, Artificial Intelligence (AI) techniques such as Genetic Algorithms (GA) and Neural Networks (NN) are applied to sample data to optimise the usual interpolation methods, comprising polynomials and the Inverse Distance Weighting (IDW) method, and to produce a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can then be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. They show that AI methods have high potential for elevation interpolation: using artificial network algorithms for interpolation, and optimising the IDW method with GA, yields highly precise elevation estimates.
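
    The IDW interpolator discussed above is compact enough to sketch directly; this is the generic textbook form with a fixed power parameter, not the GA-optimised variant from the paper:

```python
def idw(points, query, power=2.0):
    """Inverse Distance Weighting: estimate the elevation at `query` from
    (x, y, z) sample points, weighting each by 1 / distance**power.
    An exact coordinate match returns that sample's elevation."""
    num = den = 0.0
    for x, y, z in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return z
        w = d2 ** (-power / 2.0)           # = 1 / distance**power
        num += w * z
        den += w
    return num / den
```

    The `power` exponent controls how sharply influence decays with distance; it is exactly this kind of parameter that a GA can tune against held-out elevation samples.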

  2. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours; and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For comparison of the results, we use check points taken from the same drive test data. Prediction values at the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.

  3. Bright-White Beetle Scales Optimise Multiple Scattering of Light

    NASA Astrophysics Data System (ADS)

    Burresi, Matteo; Cortese, Lorenzo; Pattelli, Lorenzo; Kolle, Mathias; Vukusic, Peter; Wiersma, Diederik S.; Steiner, Ullrich; Vignolini, Silvia

    2014-08-01

    Whiteness arises from diffuse and broadband reflection of light, typically achieved through optical scattering in randomly structured media. In contrast to structural colour due to coherent scattering, white appearance generally requires a relatively thick system comprising randomly positioned high-refractive-index scattering centres. Here, we show that the exceptionally bright white appearance of Cyphochilus and Lepidiota stigma beetles arises from a remarkably optimised anisotropy of the intra-scale chitin networks, which act as dense scattering media. Using time-resolved measurements, we show that light propagating in the scales of the beetles undergoes pronounced multiple scattering, associated with the lowest transport mean free path reported to date for low-refractive-index systems. Our light transport investigation unveils the high level of optimisation that achieves high-brightness whiteness in a thin, low-mass-per-unit-area, anisotropic disordered nanostructure.

  4. Optimising reverse logistics network to support policy-making in the case of Electrical and Electronic Equipment.

    PubMed

    Achillas, Ch; Vlachokostas, Ch; Aidonis, D; Moussiopoulos, N; Iakovou, E; Banias, G

    2010-12-01

    Due to the rapid growth of Waste Electrical and Electronic Equipment (WEEE) volumes, as well as the hazardousness of obsolete electr(on)ic goods, this type of waste is now recognised as a priority stream in the developed countries. Policy-making related to the development of the necessary infrastructure and the coordination of all relevant stakeholders is crucial for the efficient management and viability of individually collected waste. This paper presents a decision support tool for policy-makers and regulators to optimise electr(on)ic products' reverse logistics networks. To that effect, a Mixed Integer Linear Programming mathematical model is formulated, taking into account the existing infrastructure of collection points and recycling facilities. The applicability of the developed model is demonstrated with a real-world case study for the Region of Central Macedonia, Greece. The paper concludes by presenting the managerial insights obtained. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Optimal integrated management of groundwater resources and irrigated agriculture in arid coastal regions

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Heck, V.

    2014-09-01

    Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. To ensure sustainable management of such regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. Thereby, the behaviour of farms is described by crop-water-production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily to the south Batinah region of Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to contradicting objectives, such as profit-oriented agriculture vs aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, environment, and socio-economic development.

  6. Conceptualising Changes to Pre-Service Teachers' Knowledge of How to Best Facilitate Learning in Mathematics: A TPACK Inspired Initiative

    ERIC Educational Resources Information Center

    Bate, Frank G.; Day, Lorraine; Macnish, Jean

    2013-01-01

    In 2010, the Australian Commonwealth government initiated an $8m project called Teaching Teachers for the Future. The aim of the project was to engage teacher educators in a professional learning network which focused on optimising exemplary use of information and communications technologies in teacher education. By taking part in this network,…

  7. Machine learning prediction for classification of outcomes in local minimisation

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2017-01-01

    Machine learning schemes are employed to predict which local minimum will result from local energy minimisation of random starting configurations for a triatomic cluster. The input data consists of structural information at one or more of the configurations in optimisation sequences that converge to one of four distinct local minima. The ability to make reliable predictions, in terms of the energy or other properties of interest, could save significant computational resources in sampling procedures that involve systematic geometry optimisation. Results are compared for two energy minimisation schemes, and for neural network and quadratic functions of the inputs.

  8. Systemic solutions for multi-benefit water and environmental management.

    PubMed

    Everard, Mark; McInnes, Robert

    2013-09-01

    The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. 
This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Photonic simulation of entanglement growth and engineering after a spin chain quench.

    PubMed

    Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio

    2017-11-17

    The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most curious feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental aspects, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and opens up the use of photonic circuits for optimising quantum devices.

  10. Optimisation of warpage on thin shell plastic part using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    Plastic injection moulding is widely used in manufacturing a variety of parts. The injection moulding process parameters play an important role in the product's quality and productivity. Many approaches to minimising warpage and shrinkage have been addressed, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, a part of an electronic night lamp is chosen as the model. First, an experimental design was used to determine the injection parameters' effect on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.

  11. Use of a genetic algorithm to improve the rail profile on Stockholm underground

    NASA Astrophysics Data System (ADS)

    Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon

    2010-12-01

    In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
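    The roulette-wheel (fitness-proportionate) selection step mentioned above can be sketched as follows; the candidate names and fitness values are illustrative, not taken from the paper.

```python
import random

def roulette_wheel_select(population, fitness, rng=random):
    """Fitness-proportionate (roulette-wheel) selection: candidate i is
    drawn with probability fitness[i] / sum(fitness)."""
    total = sum(fitness)
    pick = rng.uniform(0.0, total)
    cumulative = 0.0
    for candidate, f in zip(population, fitness):
        cumulative += f
        if pick <= cumulative:
            return candidate
    return population[-1]  # guard against floating-point round-off

# hypothetical candidate rail profiles scored by an inverted penalty index
profiles = ["profile_A", "profile_B", "profile_C"]
fitness = [0.2, 0.3, 0.5]
random.seed(1)
counts = {p: 0 for p in profiles}
for _ in range(10000):
    counts[roulette_wheel_select(profiles, fitness)] += 1
```

    Over many draws the selection frequencies approach the normalised fitness values, so fitter profiles contribute more offspring to the next generation without discarding the rest of the population.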

  12. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially-relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability and potential as a systematic optimal bioprocess design tool was demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through the model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.

  13. An integrated modelling and multicriteria analysis approach to managing nitrate diffuse pollution: 2. A case study for a chalk catchment in England.

    PubMed

    Koo, B K; O'Connell, P E

    2006-04-01

    The site-specific land use optimisation methodology suggested by the authors in the first part of this two-part paper has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, as a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. To determine whether site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated, and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution control and agronomy at the catchment scale.

  14. Impact of malicious servers over trust and reputation models in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Verma, Vinod Kumar; Singh, Surinder; Pathak, N. P.

    2016-03-01

    This article deals with the impact of malicious servers on different trust and reputation models in wireless sensor networks. First, we analysed five trust and reputation models, namely BTRM-WSN, EigenTrust, PeerTrust, PowerTrust and the linguistic fuzzy trust model. Further, we proposed a wireless sensor network design for the optimisation of these models. Finally, the influence of malicious servers on the behaviour of the above-mentioned trust and reputation models is discussed. Statistical analysis has been carried out to prove the validity of our proposal.

  15. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.

    PubMed

    Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil

    2006-06-01

    In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited to global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.
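    A linear hybrid of the kind described, a global evolutionary stage whose best point seeds a derivative-free local search, can be sketched on a toy objective standing in for the ANN training error. The Discrete Gradient method itself is not reproduced here; a simple coordinate search plays the role of the local stage, and all parameter values are illustrative.

```python
import random

def sphere(w):  # toy stand-in for the ANN training error
    return sum(x * x for x in w)

def evolutionary_stage(f, dim, generations=200, rng=None):
    """(1+1)-style evolutionary strategy: keep the better of parent/child."""
    rng = rng or random.Random(0)
    best = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    for _ in range(generations):
        child = [x + rng.gauss(0.0, 0.5) for x in best]
        if f(child) < f(best):
            best = child
    return best

def coordinate_search(f, w, step=0.25, tol=1e-6):
    """Derivative-free local refinement: probe each coordinate by +/- step,
    halving the step whenever no move improves f."""
    w = list(w)
    while step > tol:
        improved = False
        for i in range(len(w)):
            for delta in (step, -step):
                trial = list(w)
                trial[i] += delta
                if f(trial) < f(w):
                    w = trial
                    improved = True
        if not improved:
            step /= 2.0
    return w

start = evolutionary_stage(sphere, dim=3)    # global stage
weights = coordinate_search(sphere, start)   # local refinement stage
```

    The global stage supplies a good starting point, which is exactly the role the paper assigns to the evolutionary strategy before the Discrete Gradient refinement.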

  16. An optimisation methodology of artificial neural network models for predicting solar radiation: a case study

    NASA Astrophysics Data System (ADS)

    Rezrazi, Ahmed; Hanini, Salah; Laidi, Maamar

    2016-02-01

    The right design and high efficiency of solar energy systems require accurate information on the availability of solar radiation. Owing to the cost of purchasing and maintaining radiometers, these data are not readily available, so there is a need to develop alternative ways of generating them. Artificial neural networks (ANNs) are excellent and effective tools for learning, pinpointing or generalising data regularities, as they can model nonlinear functions and cope with complex, 'noisy' data. The main objective of this paper is to show how to reach an optimal ANN model for use in the prediction of solar radiation. The measured data of the year 2007 in Ghardaïa city (Algeria) are used to demonstrate the optimisation methodology. The performance evaluation and the comparison of the ANN models' results with measured data are made on the basis of the mean absolute percentage error (MAPE). It is found that the MAPE of the optimal ANN model reaches 1.17%. This model also yields a root mean square error (RMSE) of 14.06% and an MBE of 0.12. The accuracy of the outputs exceeded 97% and reached up to 99.29%. The results indicate that the optimisation strategy satisfies practical requirements; it can be generalised for any location in the world and used in fields other than solar radiation estimation.
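    The error measures quoted above can be computed as follows. The measured and modelled values here are hypothetical, and the sign convention for MBE (actual minus predicted in this sketch) varies in the literature.

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in %."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error, in the units of the data."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mbe(actual, predicted):
    """Mean bias error; positive values mean the model under-predicts here."""
    return sum(a - p for a, p in zip(actual, predicted)) / len(actual)

# hypothetical measured vs modelled solar radiation values (W/m^2)
measured = [500.0, 650.0, 800.0]
modelled = [480.0, 660.0, 810.0]
```

    With these illustrative values, the over- and under-predictions cancel in the MBE while still contributing to the MAPE and RMSE, which is why the paper reports all three.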

  17. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A. J.; Jarvis, S. J.; Hewitt, C. N.

    2015-10-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end-use. With respect to energy, the growth of industrial society appears to have been near-exponential for the last 160 years. We provide evidence that indicates that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks (roads, railways, flight paths, pipelines, cables etc.). However, despite this continual striving for optimisation, the distribution efficiencies of these networks must decline over time as they expand due to path lengths becoming longer and more tortuous. Therefore, to maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system, namely at the points of acquisition and end-use of resources. We postulate that the maintenance of the growth of industrial society, as measured by global energy use, at the observed rate of ~2.4% yr⁻¹ stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  18. Sybil--efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).

  19. Convex relaxations for gas expansion planning

    DOE PAGES

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, globally optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other, larger real networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
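    One of the key ideas listed above, the McCormick relaxation of a bilinear term, can be sketched as follows. The box bounds and query point are illustrative; the paper applies this construction to products arising in the gas-flow constraints.

```python
def mccormick_bounds(xl, xu, yl, yu, x, y):
    """McCormick envelope of the bilinear term w = x*y over the box
    [xl, xu] x [yl, yu]: returns (lower, upper) bounds valid at (x, y)."""
    lower = max(xl * y + yl * x - xl * yl,
                xu * y + yu * x - xu * yu)
    upper = min(xu * y + yl * x - xu * yl,
                xl * y + yu * x - xl * yu)
    return lower, upper

# illustrative box x in [0, 2], y in [1, 3]; the envelope must bracket x*y
lo, hi = mccormick_bounds(0.0, 2.0, 1.0, 3.0, x=1.0, y=2.0)
```

    The four inequalities are linear in x and y, so replacing w = x*y with these bounds turns a non-convex constraint into a convex (here polyhedral) one, at the cost of some slack away from the box corners.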

  20. Optimisation in the Design of Environmental Sensor Networks with Robustness Consideration

    PubMed Central

    Budi, Setia; de Souza, Paulo; Timms, Greg; Malhotra, Vishv; Turner, Paul

    2015-01-01

    This work proposes the design of Environmental Sensor Networks (ESN) through balancing robustness and redundancy. An Evolutionary Algorithm (EA) is employed to find the optimal placement of sensor nodes in the Region of Interest (RoI). Data quality issues are introduced to simulate their impact on the performance of the ESN. Spatial Regression Test (SRT) is also utilised to promote robustness in data quality of the designed ESN. The proposed method provides high network representativeness (fit for purpose) with minimum sensor redundancy (cost), and ensures robustness by enabling the network to continue to achieve its objectives when some sensors fail. PMID:26633392

  1. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimised Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for each sub-task. A Bayesian framework is used to determine the optimal neural network structure in each case, and the networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.

  2. Effective augmentation of networked systems and enhancing pinning controllability

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2018-06-01

    Controlling dynamics of networked systems to a reference state, known as pinning control, has many applications in science and engineering. In this paper, we introduce a method for effective augmentation of networked systems, while also providing high levels of pinning controllability for the final augmented network. The problem is how to connect a sub-network to an already existing network such that the pinning controllability is maximised. We consider the eigenratio of the augmented Laplacian matrix as a pinning controllability metric, and use graph perturbation theory to approximate the influence of edge addition on the eigenratio. The proposed metric can be effectively used to find the inter-network links connecting the disjoint networks. Also, an efficient link rewiring approach is proposed to further optimise the pinning controllability of the augmented network. We provide numerical simulations on synthetic networks and show that the proposed method is more effective than heuristic ones.
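    The eigenratio metric can be illustrated on toy graphs. As a simplifying assumption, this sketch uses the plain graph Laplacian rather than the paper's augmented (pinned) Laplacian, with the convention that a smaller eigenratio indicates better controllability; it shows how adding one well-chosen edge lowers the ratio.

```python
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

def eigenratio(adj):
    """lambda_max / lambda_2 of the Laplacian, a common proxy for
    synchronisability / pinning controllability (smaller is better here)."""
    vals = np.sort(np.linalg.eigvalsh(laplacian(adj)))
    return vals[-1] / vals[1]

# toy example: a 4-node path, and the same graph with one added edge (0-3)
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])
ring = path.copy()
ring[0, 3] = ring[3, 0] = 1
```

    Closing the path into a ring raises the algebraic connectivity (lambda_2) more than the largest eigenvalue, so the eigenratio drops; a perturbation-based search over candidate inter-network links, as in the paper, ranks edges by this kind of effect without recomputing the full spectrum each time.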

  3. A centre-free approach for resource allocation with lower bounds

    NASA Astrophysics Data System (ADS)

    Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly

    2017-09-01

    Since complexity and scale of systems are continuously increasing, there is a growing interest in developing distributed algorithms that are capable to address information constraints, specially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.

  4. Stochastic optimisation of water allocation on a global scale

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.

    2014-05-01

    Climate change, increasing population and further economic developments are expected to increase water scarcity in many regions of the world. Optimal water management strategies are required to minimise the gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks in the upstream parts of large catchments, whereas demands are often largest in the industrialised downstream parts. Two extremes exist in water allocation: (i) 'First come, first served', which allows upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'All for one, one for all', which balances water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at 5 arc-minute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the HadGEM2-ES global circulation model that participated in the latest Coupled Model Intercomparison Project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres; the maximum number of nodes in a network was 140, for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation is developed for the stochastic optimisation of the water allocation.
We apply a Genetic Algorithm for each segment to estimate the set of parameters that distributes the water supply at each node. We use the Python programming language and a flexible software architecture, allowing us to straightforwardly 1) exchange the process description for the nodes so that different water allocation schemes can be tested, 2) exchange the objective function, 3) apply the optimisation either to the whole catchment or to different sub-levels, and 4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.

  5. Automatic disease diagnosis using optimised weightless neural networks for low-power wearable devices

    PubMed Central

    Edla, Damodar Reddy; Kuppili, Venkatanareshbabu; Dharavath, Ramesh; Beechu, Nareshkumar Reddy

    2017-01-01

    Low-power wearable devices for disease diagnosis can be used anytime and anywhere. They are non-invasive and pain-free, improving quality of life; however, they are resource-constrained in terms of memory and processing capability. The memory constraint allows these devices to store only a limited number of patterns, and the processing constraint leads to delayed responses. Designing a robust, highly accurate classification system under these constraints is a challenging task. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) has been proposed. It uses variable-sized random access memories to optimise memory usage, and a modified binary trie data structure to reduce test time. In addition, a bio-inspired genetic algorithm has been employed to improve accuracy. The proposed architecture is evaluated on various disease datasets using its software and hardware realisations. The experimental results prove that the proposed architecture achieves better performance in terms of accuracy, memory saving and test time compared to standard WNNs, and outperforms conventional neural network-based classifiers in terms of accuracy. The proposed architecture can thus serve as a powerful component of low-power wearable devices, addressing memory, accuracy and response-time issues. PMID:28868148

  6. Using Network Dynamical Influence to Drive Consensus

    NASA Astrophysics Data System (ADS)

    Punzo, Giuliano; Young, George F.; MacDonald, Malcolm; Leonard, Naomi E.

    2016-05-01

    Consensus and decision-making are often analysed in the context of networks, with many studies focusing attention on ranking the nodes of a network depending on their relative importance to information routing. Dynamical influence ranks the nodes with respect to their ability to influence the evolution of the associated network dynamical system. In this study it is shown that dynamical influence not only ranks the nodes, but also provides a naturally optimised distribution of effort to steer a network from one state to another. An example is provided where the “steering” refers to the physical change in velocity of self-propelled agents interacting through a network. Distinct from other works on this subject, this study looks at directed and hence more general graphs. The findings are presented with a theoretical angle, without targeting particular applications or networked systems; however, the framework and results offer parallels with biological flocks and swarms and opportunities for design of technological networks.

  7. "Talent Circulators" in Shanghai: Return Migrants and Their Strategies for Success

    ERIC Educational Resources Information Center

    Huang, Yedan; Kuah-Pearce, Khun Eng

    2015-01-01

    This paper argues for a flexible identity and citizenship framework to explore how return migrants, "haigui," have readapted and re-established themselves back into Shanghai society, and how they have used their talents, knowledge and "guanxi" networks to optimise their chances of success. It argues that these return migrants,…

  8. A novel wavelet neural network based pathological stage detection technique for an oral precancerous condition

    PubMed Central

    Paul, R R; Mukherjee, A; Dutta, P K; Banerjee, S; Pal, M; Chatterjee, J; Chaudhuri, K; Mukkerjee, K

    2005-01-01

    Aim: To describe a novel neural network based oral precancer (oral submucous fibrosis; OSF) stage detection method. Method: The wavelet coefficients of transmission electron microscopy images of collagen fibres from normal oral submucosa and OSF tissues were used to choose the feature vector which, in turn, was used to train the artificial neural network. Results: The trained network was able to classify normal and oral precancer stages (less advanced and advanced) after obtaining the image as an input. Conclusions: The results obtained from this proposed technique were promising and suggest that with further optimisation this method could be used to detect and stage OSF, and could be adapted for other conditions. PMID:16126873

  9. Strong constraint on modelled global carbon uptake using solar-induced chlorophyll fluorescence data.

    PubMed

    MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias

    2018-01-31

Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity, GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr⁻¹ to 166 ± 10 PgC yr⁻¹, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr⁻¹. The strongest reductions in GPP are seen in boreal forests: the result is a shift in the global GPP distribution, with a ~50% increase in the tropical-to-boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.

  10. Re-Purposing Google Maps Visualisation for Teaching Logistics Systems

    ERIC Educational Resources Information Center

    Cheong, France; Cheong, Christopher; Jie, Ferry

    2012-01-01

    Routing is the process of selecting appropriate paths and ordering waypoints in a network. It plays an important part in logistics and supply chain management as choosing the optimal route can minimise distribution costs. Routing optimisation, however, is a difficult problem to solve and computer software is often used to determine the best route.…

  11. Efficient embedding of complex networks to hyperbolic space via their Laplacian

    PubMed Central

    Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.

    2016-01-01

The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth holds that the emergence of properties common to most complex systems results from certain trade-offs between node birth time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the aforementioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data-driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction. PMID:27445157
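
    The spectral idea behind Laplacian-based embedding can be sketched on a toy graph: low-order eigenvectors of the graph Laplacian place similar nodes at nearby coordinates. The sketch below extracts the Fiedler vector (second-smallest eigenvector) of a 4-node path by pure-Python power iteration; the published method targets hyperbolic coordinates of large networks and is not reproduced here:

```python
def fiedler_vector(adj, iters=2000):
    """Second-smallest Laplacian eigenvector via power iteration on
    B = c*I - L, projecting out the constant (smallest) mode."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2.0 * max(deg)          # shift so all eigenvalues of B are >= 0
    # B = c*I - L with L = D - A; the dominant eigenvector of B that is
    # orthogonal to the all-ones vector is the Fiedler vector of L.
    B = [[(c - deg[i]) if i == j else adj[i][j] for j in range(n)]
         for i in range(n)]
    v = [(i + 1) % 3 - 1.0 for i in range(n)]   # arbitrary start vector
    for _ in range(iters):
        v = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        mean = sum(v) / n
        v = [x - mean for x in v]               # project out constant mode
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Path graph 0-1-2-3: the Fiedler vector orders nodes along the path.
path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
coords = fiedler_vector(path)
```

    Sorting the nodes by their one-dimensional coordinate recovers the path ordering, which is the "similar nodes land close together" property that embedding methods exploit.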

  12. Efficient embedding of complex networks to hyperbolic space via their Laplacian

    NASA Astrophysics Data System (ADS)

    Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.

    2016-07-01

The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth holds that the emergence of properties common to most complex systems results from certain trade-offs between node birth time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the aforementioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data-driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction.

  13. A joint technique for sidelobe suppression and peak-to-average power ratio reduction in non-contiguous OFDM-based cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Sivadas, Namitha Arackal; Mohammed, Sameer Saheerudeen

    2017-02-01

In non-contiguous orthogonal frequency division multiplexing (NC-OFDM)-based interweave cognitive radio networks, the sidelobe power of secondary users (SUs) must be strictly controlled to avoid interference between the SUs and the primary users (PUs) of the adjacent bands. Similarly, the inherently high peak-to-average power ratio (PAPR) of the OFDM signal is another drawback of cognitive radio systems based on NC-OFDM technology. A few methods are available in the literature to solve either of these problems individually; in this paper, we propose a new method for the joint minimisation of sidelobe power and PAPR in NC-OFDM-based cognitive radio networks using a Zadoff-Chu (ZC) sequence. In this method, the suppression of the SUs' sidelobe power benefits the PUs, while the PAPR is reduced for the SUs. We model a new optimisation problem that minimises the sidelobe power under constraints on the maximum tolerable PAPR and sidelobe power. The proper selection of the ZC sequence, which is crucial for mitigating both issues simultaneously, is achieved by solving the proposed optimisation problem. The proposed technique is shown to provide 7 dB and 20 dB reductions in PAPR and sidelobe power, respectively, without causing any signal distortion, along with an improvement in bit error rate (BER) performance.
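
    Zadoff-Chu sequences are constant-amplitude zero-autocorrelation (CAZAC) sequences, which is the property that makes them attractive for PAPR control. A small sketch generating a root-u ZC sequence of odd length N (gcd(u, N) = 1) and checking those properties; the paper's root-selection optimisation is not reproduced:

```python
import cmath

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N with gcd(u, N) = 1:
    x[n] = exp(-j * pi * u * n * (n + 1) / N), n = 0..N-1."""
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / N)
            for n in range(N)]

zc = zadoff_chu(u=1, N=7)
# Every sample has unit magnitude (constant envelope), and the cyclic
# autocorrelation vanishes at all non-zero lags.
```

    The constant envelope keeps the sequence's own PAPR at its minimum, while the zero autocorrelation underlies its interference-friendly spectral behaviour.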

  14. Improving linear transport infrastructure efficiency by automated learning and optimised predictive maintenance techniques (INFRALERT)

    NASA Astrophysics Data System (ADS)

    Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele

    2017-09-01

The ongoing H2020 project INFRALERT aims to increase rail and road infrastructure capacity, in the current context of growing transportation demand, by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach comprising several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for (i) nowcasting and forecasting of asset condition, (ii) alert generation, (iii) RAMS & LCC analysis and (iv) decision support. The results of these toolkits for a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.

  15. Bringing metabolic networks to life: convenience rate law and thermodynamic constraints

    PubMed Central

    Liebermeister, Wolfram; Klipp, Edda

    2006-01-01

Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamic constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network, manually or automatically, into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
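
    For unit stoichiometries the convenience rate law takes a compact closed form, v = E (k+ ∏ s~ - k- ∏ p~) / (∏(1 + s~) + ∏(1 + p~) - 1) with s~ = S/KM. A minimal sketch with hypothetical parameter values (inhibition/activation terms omitted):

```python
from math import prod

def convenience_rate(E, kcat_f, kcat_r, S, KS, P, KP):
    """Convenience kinetics rate for a reaction with unit
    stoichiometries. S, P: substrate/product concentrations;
    KS, KP: the corresponding Michaelis-Menten constants."""
    s = [x / k for x, k in zip(S, KS)]   # normalised substrate levels
    p = [x / k for x, k in zip(P, KP)]   # normalised product levels
    num = kcat_f * prod(s) - kcat_r * prod(p)
    den = prod(1 + x for x in s) + prod(1 + x for x in p) - 1
    return E * num / den

# Irreversible direction: no product present, S = 2, KM = 1.
v_fwd = convenience_rate(E=1.0, kcat_f=10.0, kcat_r=1.0,
                         S=[2.0], KS=[1.0], P=[0.0], KP=[1.0])
# Equilibrium: k+ * s~ = k- * p~  =>  the net rate vanishes.
v_eq = convenience_rate(E=1.0, kcat_f=2.0, kcat_r=1.0,
                        S=[1.0], KS=[1.0], P=[2.0], KP=[1.0])
```

    The saturating denominator reproduces Michaelis-Menten behaviour in the single-substrate case, and the rate changes sign exactly at thermodynamic equilibrium.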

  16. The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.

    2012-04-01

To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in water use efficiency in irrigated agriculture. To achieve a robust and fast operation of the management system with respect to water quality and quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network to model the aquifer response, including the seawater interface, trained on a scenario database generated by a numerical density-dependent groundwater flow model. For simulating the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models adapted to the regional climate conditions, together with a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates exemplarily within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and resulting abstraction scenarios. Due to contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-criteria optimisation is performed.

  17. Optimisation of MSW collection routes for minimum fuel consumption using 3D GIS modelling.

    PubMed

    Tavares, G; Zsigraiova, Z; Semiao, V; Carvalho, M G

    2009-03-01

    Collection of municipal solid waste (MSW) may account for more than 70% of the total waste management budget, most of which is for fuel costs. It is therefore crucial to optimise the routing network used for waste collection and transportation. This paper proposes the use of geographical information systems (GIS) 3D route modelling software for waste collection and transportation, which adds one more degree of freedom to the system and allows driving routes to be optimised for minimum fuel consumption. The model takes into account the effects of road inclination and vehicle weight. It is applied to two different cases: routing waste collection vehicles in the city of Praia, the capital of Cape Verde, and routing the transport of waste from different municipalities of Santiago Island to an incineration plant. For the Praia city region, the 3D model that minimised fuel consumption yielded cost savings of 8% as compared with an approach that simply calculated the shortest 3D route. Remarkably, this was true despite the fact that the GIS-recommended fuel reduction route was actually 1.8% longer than the shortest possible travel distance. For the Santiago Island case, the difference was even more significant: a 12% fuel reduction for a similar total travel distance. These figures indicate the importance of considering both the relief of the terrain and fuel consumption in selecting a suitable cost function to optimise vehicle routing.
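
    The core idea, routing by fuel cost rather than distance, can be sketched with a Dijkstra search over road segments whose weights depend on distance, elevation gain and vehicle mass. The fuel-model coefficients and the toy road network below are hypothetical; the record's GIS 3D modelling is not reproduced:

```python
import heapq

def fuel_cost(dist_km, rise_m, mass_t, base=0.4, climb=0.002):
    """Hypothetical fuel model (litres): flat consumption per km plus a
    mass-dependent climb penalty; descents add no cost here."""
    return base * dist_km + climb * mass_t * max(rise_m, 0.0)

def cheapest_route(edges, elev, src, dst, mass_t):
    """Dijkstra over road segments weighted by fuel, not distance.
    edges: {node: [(neighbour, dist_km), ...]}; elev: node -> metres."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in edges[node]:
            if nxt not in seen:
                c = fuel_cost(d, elev[nxt] - elev[node], mass_t)
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Short but steep route A-B-D vs a longer, nearly flat route A-C-D.
edges = {"A": [("B", 2.0), ("C", 3.0)], "B": [("D", 2.0)],
         "C": [("D", 3.0)], "D": []}
elev = {"A": 0.0, "B": 120.0, "C": 5.0, "D": 0.0}
cost, route = cheapest_route(edges, elev, "A", "D", mass_t=20.0)
```

    With these assumed coefficients the heavy vehicle takes the longer flat route, mirroring the record's finding that the fuel-optimal route can exceed the shortest route in length yet cost less.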

  18. Optimising the production of succinate and lactate in Escherichia coli using a hybrid of artificial bee colony algorithm and minimisation of metabolic adjustment.

    PubMed

    Tang, Phooi Wah; Choon, Yee Wen; Mohamad, Mohd Saberi; Deris, Safaai; Napis, Suhaimi

    2015-03-01

Metabolic engineering is a research field that focuses on the design of models for metabolism and uses computational procedures to suggest genetic manipulations. It aims to improve the yield of particular chemical or biochemical products. Several traditional metabolic engineering methods are commonly used to increase the production of a desired target, but the products always remain far below their theoretical maximums. Using numerical optimisation algorithms to identify gene knockouts may stall at a local minimum of a multivariable function. This paper proposes a hybrid of the artificial bee colony (ABC) algorithm and the minimisation of metabolic adjustment (MOMA) to predict an optimal set of solutions in order to optimise the production rates of succinate and lactate. The dataset used in this work was the iJO1366 Escherichia coli metabolic network. The experimental results include the production rate, growth rate and a list of knockout genes. In a comparative analysis, ABC-MOMA produced better results than previous works, showing potential for solving genetic engineering problems. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
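
    A minimal artificial bee colony sketch on a toy continuous objective (the onlooker phase is simplified to uniform source selection, and the iJO1366 knockout search is not reproduced; all parameter values are illustrative):

```python
import random

def abc_minimise(f, dim, bounds=(-5.0, 5.0), n_food=15, limit=20,
                 cycles=300, seed=1):
    """Simplified artificial bee colony: bees perturb one dimension of a
    food source toward a random other source; sources abandoned after
    `limit` failed improvements are replaced by scouts."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(foods, key=f)
    for _ in range(cycles):
        for _phase in ("employed", "onlooker"):
            for i in range(n_food):
                k = rng.randrange(n_food - 1)
                k += k >= i                      # random partner != i
                j = rng.randrange(dim)
                cand = foods[i][:]
                cand[j] += rng.uniform(-1, 1) * (cand[j] - foods[k][j])
                cand[j] = min(hi, max(lo, cand[j]))
                fc = f(cand)
                if fc < fits[i]:                 # greedy replacement
                    foods[i], fits[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        for i in range(n_food):                  # scout phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0
        cur = min(foods, key=f)
        if f(cur) < f(best):
            best = cur
    return best

best = abc_minimise(lambda x: sum(v * v for v in x), dim=3)
```

    The scout phase is what lets ABC escape the local minima that plague purely greedy knockout searches.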

  19. A novel swarm intelligence algorithm for finding DNA motifs.

    PubMed

    Lei, Chengwei; Ruan, Jianhua

    2009-01-01

    Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
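
    The paper adapts PSO to the discrete motif space via a word dissimilarity graph; the sketch below shows only the underlying continuous PSO update rule on a toy objective (all parameter values are illustrative):

```python
import random

def pso_minimise(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Plain continuous particle swarm optimisation: each particle is
    pulled toward its personal best and the global best position."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

gbest = pso_minimise(lambda x: sum(v * v for v in x), dim=3)
```

    For motif finding, the continuous position update must be replaced by moves between neighbouring words, which is exactly the modification for discrete variables the paper proposes.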

  20. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    NASA Astrophysics Data System (ADS)

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods face great challenges, as various factors and different stages inevitably become coupled during the design process. Management of massive information, or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming to overcome the difficulties involved in designing the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanically coupled model integrating the mechanical structure, the control system and the motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.

  1. Body area network--a key infrastructure element for patient-centered telemedicine.

    PubMed

    Norgall, Thomas; Schmidt, Robert; von der Grün, Thomas

    2004-01-01

The Body Area Network (BAN) extends the range of existing wireless network technologies with an ultra-low range, ultra-low power network solution optimised for long-term or continuous healthcare applications. It enables wireless radio communication between several miniaturised, intelligent Body Sensor (or actor) Units (BSU) and a single Body Central Unit (BCU) worn on the human body. A separate wireless transmission link from the BCU to a network access point, using different technology, provides online access to BAN components via the usual network infrastructure. The BAN network protocol supports dynamic ad-hoc network configuration scenarios and the co-existence of multiple networks. BAN is expected to become a basic infrastructure element for electronic health services: by integrating patient-attached sensors, mobile actor units, and distributed information and data processing systems, the range of medical workflows can be extended to include applications such as wireless multi-parameter patient monitoring and therapy support. Beyond clinical use and professional disease management environments, private personal health assistance scenarios (without financial reimbursement by health agencies or insurance companies) enable a wide range of applications and services in future pervasive computing and networking environments.

  2. Large-scale quantum networks based on graphs

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society relies and depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.

  3. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications for coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  4. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications for coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  5. Undermining and Strengthening Social Networks through Network Modification

    PubMed Central

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-01-01

Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper both helps to situate the existing work on network interventions and opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as to show how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention. PMID:27703198
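
    The simplest instance of optimising a node-removal intervention is exhaustive search over candidate nodes against an objective. The sketch below uses a stand-in objective (minimise the largest remaining connected component) on a toy graph; the paper instead optimises empirically calibrated objectives, such as expected attacks, over ERGM-simulated network recovery:

```python
def largest_component(nodes, edges):
    """Size of the largest connected component via union-find."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    sizes = {}
    for v in nodes:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

def best_removal(nodes, edges):
    """Node whose removal minimises the largest remaining component."""
    def after(v):
        rest = [u for u in nodes if u != v]
        kept = [(a, b) for a, b in edges if v not in (a, b)]
        return largest_component(rest, kept)
    return min(nodes, key=after)

# Two triangles joined by the cut vertex "c": removing "c" splits them.
nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("a", "c"), ("b", "c"),
         ("c", "d"), ("c", "e"), ("d", "e")]
```

    Swapping in a different objective function or an endogenous recovery simulation changes only the `after` evaluation, which is the modularity the framework argues for.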

  6. Undermining and Strengthening Social Networks through Network Modification.

    PubMed

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-10-05

Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper both helps to situate the existing work on network interventions and opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as to show how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention.

  7. Undermining and Strengthening Social Networks through Network Modification

    NASA Astrophysics Data System (ADS)

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-10-01

Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper both helps to situate the existing work on network interventions and opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as to show how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention.

  8. Enriching regulatory networks by bootstrap learning using optimised GO-based gene similarity and gene links mined from PubMed abstracts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2011-02-18

Transcriptional regulatory networks are being determined using "reverse engineering" methods that infer connections based on correlations in gene state. Corroboration of such networks through independent means, such as evidence from the biomedical literature, is desirable. Here, we explore a novel approach, a bootstrapping version of our previous Cross-Ontological Analytic method (XOA), that can be used for semi-automated annotation and verification of inferred regulatory connections, as well as for discovery of additional functional relationships between the genes. First, we use our annotation and network expansion method on a biological network learned entirely from the literature. We show how new relevant links between genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. Second, we apply our method to the annotation, verification, and expansion of a set of regulatory connections found by the Context Likelihood of Relatedness algorithm.

  9. An improved spanning tree approach for the reliability analysis of supply chain collaborative network

    NASA Astrophysics Data System (ADS)

    Lam, C. Y.; Ip, W. H.

    2012-11-01

A higher degree of reliability in the collaborative network can increase the competitiveness and performance of an entire supply chain. As supply chain networks grow more complex, the consequences of unreliable behaviour become increasingly severe in terms of cost, effort and time. Moreover, calculating the all-terminal reliability of a network is a Non-deterministic Polynomial-time hard (NP-hard) problem, and state enumeration may require a huge number of iterations for topology optimisation. Therefore, this paper proposes an alternative approach, an improved spanning tree for reliability analysis, to help effectively evaluate and analyse the reliability of collaborative networks in supply chains and to reduce the comparative computational complexity of the algorithms. Set theory is employed to evaluate and model the all-terminal reliability of the improved spanning tree algorithm, and a case study of a supply chain used in lamp production illustrates the application of the proposed approach.
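
    To make the cost of state enumeration concrete, here is the exponential baseline the improved spanning-tree approach is designed to avoid: exact all-terminal reliability by enumerating all 2^|E| edge states of a small graph (edge reliability p assumed equal for all edges):

```python
from itertools import combinations

def connected(nodes, edges):
    """True if the surviving edges connect all nodes (DFS)."""
    if not nodes:
        return True
    adj = {v: [] for v in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == len(nodes)

def all_terminal_reliability(nodes, edges, p):
    """Exact all-terminal reliability by enumerating every subset of
    operational edges -- O(2^|E|), feasible only for tiny networks."""
    m = len(edges)
    total = 0.0
    for r in range(m + 1):
        for up in combinations(edges, r):
            if connected(nodes, list(up)):
                total += p ** r * (1 - p) ** (m - r)
    return total

# Triangle network: analytically R = 3 p^2 (1 - p) + p^3.
R = all_terminal_reliability([1, 2, 3], [(1, 2), (2, 3), (1, 3)], p=0.9)
```

    Even this 3-edge example already evaluates 8 states; a modest 30-edge collaborative network would need over a billion, which is the motivation for spanning-tree-based shortcuts.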

  10. A distributed predictive control approach for periodic flow-based networks: application to drinking water systems

    NASA Astrophysics Data System (ADS)

    Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-10-01

    This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate in order to enhance the performance of the whole flow network while avoiding the use of a coordination layer. Instead, controllers use both the monolithic model of the network and the given global cost function to optimise the control inputs of the local controllers, but taking into account the effect of their decisions on the remaining subsystems that make up the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resultant strategy is tested and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.

  11. Social support differentially moderates the impact of neuroticism and extraversion on mental wellbeing among community-dwelling older adults.

    PubMed

    McHugh, J E; Lawlor, B A

    2012-10-01

    Personality affects psychological wellbeing, and social support networks may mediate this effect. This may be particularly pertinent in later life, when social structures change significantly, and can lead to a decline in psychological wellbeing. To examine, in an older population, whether the relationships between neuroticism and extraversion and mental wellbeing are moderated by available social support networks. We gathered information from 536 community-dwelling older adults, regarding personality, social support networks, depressive symptomatology, anxiety and perceived stress, as well as controlling for age and gender. Neuroticism and extraversion interacted with social support networks to determine psychological wellbeing (depression, stress and anxiety). High scores on the social support networks measure appear to be protective against the deleterious effects of high scores on the neuroticism scale on psychological wellbeing. Meanwhile, individuals high in extraversion appear to require large social support networks in order to maintain psychological wellbeing. Large familial and friendship social support networks are associated with good psychological wellbeing. To optimise psychological wellbeing in older adults, improving social support networks may be differentially effective for different personality types.

  12. Casemix Funding Optimisation: Working Together to Make the Most of Every Episode.

    PubMed

    Uzkuraitis, Carly; Hastings, Karen; Torney, Belinda

    2010-10-01

    Eastern Health, a large public Victorian Healthcare network, conducted a WIES optimisation audit across the casemix-funded sites for separations in the 2009/2010 financial year. The audit was conducted using existing staff resources and resulted in a significant increase in casemix funding at a minimal cost. The audit showcased the skill set of existing staff and resulted in enormous benefits to the coding and casemix team by demonstrating the value of the combination of skills that makes clinical coders unique. The development of an internal web-based application allowed accurate and timely reporting of the audit results, providing the basis for a restructure of the coding and casemix service, along with approval for additional staffing resources and inclusion of a regular auditing program to focus on the creation of high quality data for research, health services management and financial reimbursement.

  13. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
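The exact sample-selection idea described above can be sketched as a small dynamic programme that chooses the subset of samples minimising the squared linear-interpolation error of the reconstruction. This is a generic illustration (the paper's network-model formulation and cubic algorithm are not reproduced), and the names are illustrative:

```python
def interp_cost(x, i, j):
    """Squared error of linearly interpolating x between indices i and j."""
    err = 0.0
    for k in range(i + 1, j):
        est = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
        err += (x[k] - est) ** 2
    return err

def best_samples(x, m):
    """Choose m sample indices (including both endpoints) that minimise
    the total linear-interpolation error; an O(n^2 * m) dynamic programme."""
    n = len(x)
    INF = float("inf")
    # dp[j][k]: minimal error for x[0..j] keeping k+1 samples, last one at j
    dp = [[INF] * m for _ in range(n)]
    back = [[-1] * m for _ in range(n)]
    dp[0][0] = 0.0
    for j in range(1, n):
        for k in range(1, m):
            for i in range(j):
                if dp[i][k - 1] == INF:
                    continue
                c = dp[i][k - 1] + interp_cost(x, i, j)
                if c < dp[j][k]:
                    dp[j][k] = c
                    back[j][k] = i
    # recover the kept indices by walking the back-pointers
    idx, j, k = [], n - 1, m - 1
    while j >= 0 and k >= 0:
        idx.append(j)
        j, k = back[j][k], k - 1
    return list(reversed(idx)), dp[n - 1][m - 1]
```

For a perfectly linear signal the optimum keeps only the two endpoints with zero error, which is the kind of guarantee heuristic sample-selection methods cannot offer.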

  14. The tensor network theory library

    NASA Astrophysics Data System (ADS)

    Al-Assam, S.; Clark, S. R.; Jaksch, D.

    2017-09-01

    In this technical paper we introduce the tensor network theory (TNT) library—an open-source software project aimed at providing a platform for rapidly developing robust, easy-to-use and highly optimised code for TNT calculations. The objectives of this paper are (i) to give an overview of the structure of the TNT library, and (ii) to help scientists decide whether to use the TNT library in their research. We show how to employ the TNT routines by giving examples of ground-state and dynamical calculations of a one-dimensional bosonic lattice system. We also discuss different options for gaining access to the software available at www.tensornetworktheory.org.

  15. Allocation of solid waste collection bins and route optimisation using geographical information system: A case study of Dhanbad City, India.

    PubMed

    Khan, D; Samadder, S R

    2016-07-01

    Collection of municipal solid waste is one of the most important elements of municipal waste management and consumes the largest share of the funds allocated for waste management. The cost of collection and transportation can be reduced in comparison with the present scenario if the solid waste collection bins are located at suitable places such that the collection routes become minimal. This study presents a method for allocating solid waste collection bins at appropriate, uniformly spaced and easily accessible locations, so that the collection vehicle routes become minimal, for the city of Dhanbad, India. The network analyst toolset available in ArcGIS was used to find the optimised route for solid waste collection, considering all the parameters required for efficient solid waste collection. These parameters include the positions of solid waste collection bins, the road network, the population density, waste collection schedules, truck capacities and their characteristics. The present study also demonstrates the significant cost reductions that can be obtained compared with current practices in the study area. The vehicle routing problem solver tool of ArcGIS was used to identify the cost-effective scenario for waste collection, to estimate its running costs and to simulate its application considering both travel time and travel distance simultaneously. © The Author(s) 2016.

  16. Classification of osteoporosis by artificial neural network based on monarch butterfly optimisation algorithm.

    PubMed

    Devikanniga, D; Joshua Samuel Raj, R

    2018-04-01

    Osteoporosis is a life-threatening disease which commonly affects women, mostly after menopause. It primarily causes mild bone fractures, which at an advanced stage can lead to the death of an individual. The diagnosis of osteoporosis is based on bone mineral density (BMD) values obtained through various clinical methods at various skeletal regions. The main objective of the authors' work is to develop a hybrid classifier model that discriminates an osteoporotic patient from a healthy person on the basis of BMD values. In this Letter, the authors propose the monarch butterfly optimisation-based artificial neural network classifier, which helps in earlier diagnosis and prevention of osteoporosis. The experiments were conducted using the 10-fold cross-validation method for two datasets, lumbar spine and femoral neck. The results were compared with other similar hybrid approaches. The proposed method resulted in accuracy, specificity and sensitivity of 97.9% ± 0.14, 98.33% ± 0.03 and 95.24% ± 0.08, respectively, for the lumbar spine dataset and 99.3% ± 0.16, 99.2% ± 0.13 and 100%, respectively, for the femoral neck dataset. Further, its performance was compared using receiver operating characteristic analysis and the Wilcoxon signed-rank test. The results proved that the proposed classifier is efficient and outperformed the other approaches in all cases.

  17. Developing a spinal cord injury research strategy using a structured process of evidence review and stakeholder dialogue. Part III: outcomes.

    PubMed

    Middleton, J W; Piccenna, L; Lindsay Gruen, R; Williams, S; Creasey, G; Dunlop, S; Brown, D; Batchelor, P E; Berlowitz, D J; Coates, S; Dunn, J A; Furness, J B; Galea, M P; Geraghty, T; Kwon, B K; Urquhart, S; Yates, D; Bragge, P

    2015-10-01

    Focus Group. To develop a unified, regional spinal cord injury (SCI) research strategy for Australia and New Zealand. Australia. A 1-day structured stakeholder dialogue was convened in 2013 in Melbourne, Australia, by the National Trauma Research Institute in collaboration with the SCI Network of Australia and New Zealand. Twenty-three experts participated, representing local and international research, clinical, consumer, advocacy, government policy and funding perspectives. Preparatory work synthesised evidence and articulated draft principles and options as a starting point for discussion. A regional SCI research strategy was proposed, whose objectives can be summarised under four themes. (1) Collaborative networks and strategic partnerships to increase efficiency, reduce duplication, build capacity and optimise research funding. (2) Research priority setting and coordination to manage competing studies. (3) Mechanisms for greater consumer engagement in research. (4) Resources and infrastructure to further develop SCI data registries, evaluate research translation and assess alignment of research strategy with stakeholder interests. These are consistent with contemporary international SCI research strategy development activities. This first step in a regional SCI research strategy has articulated objectives for further development by the wider SCI research community. The initiative has also reinforced the importance of coordinated, collective action in optimising outcomes following SCI.

  18. Multiobjective assessment of distributed energy storage location in electricity networks

    NASA Astrophysics Data System (ADS)

    Ribeiro Gonçalves, José António; Neves, Luís Pires; Martins, António Gomes

    2017-07-01

    This paper presents a methodology to provide a decision maker with information on the associated impacts, both of an economic and a technical nature, of possible management schemes of storage units for choosing the best location of distributed storage devices, with a multiobjective optimisation approach based on genetic algorithms. The methodology was applied to a case study, a known distribution network model in which the installation of distributed storage units was tested, using lithium-ion batteries. The results obtained show a significant influence of the charging/discharging profile of the batteries on the choice of their best location, as well as the relevance that these choices may have for different network management objectives, for example, reducing network energy losses or minimising voltage deviations. Results also show that an energy-only service is difficult to make cost-effective with the tested systems, both due to capital cost and due to the efficiency of conversion.

  19. Fronthaul evolution: From CPRI to Ethernet

    NASA Astrophysics Data System (ADS)

    Gomes, Nathan J.; Chanclou, Philippe; Turnbull, Peter; Magee, Anthony; Jungnickel, Volker

    2015-12-01

    It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from use of lower-cost equipment, shared use of infrastructure with fixed access networks, to obtaining statistical multiplexing and optimised performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high-bit-rate requirements from the transport of increased bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transport of baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronisation issues remain to be solved.

  20. Bacterial Unculturability and the Formation of Intercellular Metabolic Networks.

    PubMed

    Pande, Samay; Kost, Christian

    2017-05-01

    The majority of known bacterial species cannot be cultivated under laboratory conditions. Here we argue that the adaptive emergence of obligate metabolic interactions in natural bacterial communities can explain this pattern. Bacteria commonly release metabolites into the external environment. Accumulating pools of extracellular metabolites create an ecological niche that benefits auxotrophic mutants, which have lost the ability to autonomously produce the corresponding metabolites. In addition to a diffusion-based metabolite transfer, auxotrophic cells can use contact-dependent means to obtain nutrients from other co-occurring cells. Spatial colocalisation and a continuous coevolution further increase the nutritional dependency and optimise fluxes through combined metabolic networks. Thus, bacteria likely function as networks of interacting cells that reciprocally exchange nutrients and biochemical functions rather than as physiologically autonomous units. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits

    NASA Astrophysics Data System (ADS)

    Vellingiri, Govindaraj; Jayabalan, Ramesh

    2018-03-01

    Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious, low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results than BPNN, with a testing error that varies from 0% to 0.86%, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and mean least-squares optimisation algorithms. ANFIS is best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.

  2. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous number of real-time experiments and computational simulations in order to assess and ensure the design objectives of the problem subject to various constraints. In most cases, the computational resources and time required per simulation are large. In cases such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this poses a lasting difficulty for designers. Nowadays, approximation models, otherwise called surrogate models (SMs), are widely employed to reduce the computational resources and time required to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
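The k-fold cross-validation step used to assess surrogate accuracy can be illustrated with a toy surrogate (a least-squares line standing in for Kriging). The names, the linear model and the data split are illustrative assumptions, not the paper's setup:

```python
import random

def fit_line(xs, ys):
    """Closed-form least-squares line y = a + b*x (a toy surrogate model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def kfold_mse(xs, ys, k=5, seed=0):
    """k-fold cross-validation: fit the surrogate on k-1 folds, measure
    mean squared prediction error on the held-out fold, average over folds."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        errs.append(sum((ys[i] - (a + b * xs[i])) ** 2 for i in fold) / len(fold))
    return sum(errs) / k
```

In the paper the same loop would wrap a Kriging predictor built with each candidate variogram model, and the fold-averaged error ranks the variograms.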

  3. A new paradigm in personal dosimetry using LiF:Mg,Cu,P.

    PubMed

    Cassata, J R; Moscovitch, M; Rotunda, J E; Velbeck, K J

    2002-01-01

    The United States Navy has been monitoring personnel for occupational exposure to ionising radiation since 1947. Film was used exclusively until 1973, when thermoluminescence dosemeters were introduced; they remain in use to the present time. In 1994, a joint research project between the Naval Dosimetry Center, Georgetown University, and Saint-Gobain Crystals and Detectors (formerly Bicron RMP, formerly Harshaw TLD) began to develop a state-of-the-art thermoluminescent dosimetry system. The study was conducted from a large-scale dosimetry processor point of view, with emphasis on a systems approach. Significant improvements were achieved by replacing the LiF:Mg,Ti with LiF:Mg,Cu,P TL elements, owing to the significant increase in sensitivity, the linearity, and the negligible fading. Dosemeter filters were optimised for gamma and X-ray energy discrimination using Monte Carlo modelling (MCNP), resulting in significant improvement in accuracy and precision. Further improvements were achieved through the use of neural-network-based dose calculation algorithms. Both back-propagation and functional-link methods were implemented, and the data compared with essentially the same results. Several operational aspects of the system are discussed, including (1) background subtraction using control dosemeters, (2) selection criteria for control dosemeters, (3) optimisation of the TLD readers, (4) calibration methodology, and (5) the optimisation of the heating profile.

  4. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing mode established by practically simulating natural evolutionary processes based on the concept of Darwinian theory, and it is a common research method. The field has grown to include the concepts of animal foraging behaviour and group behaviour. The main contribution of this paper is to strengthen the ability of the fruit fly optimization algorithm (FOA) to search for the optimised solution, in order to avoid becoming trapped in local extremum solutions. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the ability of the algorithms to compute the extreme values of three mathematical functions, as well as the algorithm execution speed and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and FOA. In addition, the MFOA performed better than particle swarm optimization in algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
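The basic FOA loop that the MFOA modifies can be sketched as follows. This is a generic textbook-style version (a swarm centre, random fly positions, and a smell-concentration judgement value S = 1/distance), not the authors' modified algorithm; the names are illustrative:

```python
import math
import random

def foa_minimise(f, iters=200, flies=20, seed=1):
    """Basic fruit fly optimisation: flies search randomly around a swarm
    centre, each candidate solution is S = 1/distance-to-origin, and the
    best fly's position becomes the new swarm centre."""
    rng = random.Random(seed)
    cx, cy = rng.uniform(0, 1), rng.uniform(0, 1)   # initial swarm centre
    best_s, best_val = None, float("inf")
    for _ in range(iters):
        for _ in range(flies):
            x = cx + rng.uniform(-1, 1)             # random search direction
            y = cy + rng.uniform(-1, 1)
            d = math.hypot(x, y) or 1e-12           # distance to origin
            s = 1.0 / d                             # smell concentration value
            v = f(s)                                # "smell" = objective value
            if v < best_val:
                best_s, best_val, cx, cy = s, v, x, y
    return best_s, best_val
```

The weakness the paper targets is visible here: because the centre jumps greedily to the best fly, the plain FOA can stall at a local extremum of a multimodal objective.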

  5. A generalised significance test for individual communities in networks.

    PubMed

    Kojaku, Sadamori; Masuda, Naoki

    2018-05-09

    Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structures of networks, communities are generally heterogeneous in various aspects such as size, density of edges, connectivity to other communities and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that the quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community-detection algorithms, including modularity maximisation and graph partitioning, meet this criterion. Our method estimates a distribution of the quality function for randomised networks to calculate the likelihood of each community in the given network. We illustrate our algorithm on synthetic and empirical networks.
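The randomisation idea can be illustrated with a deliberately simplified Monte Carlo test that scores a community by its internal-edge count and compares it with random node sets of the same size. The paper's method is more general (it randomises networks and accepts any community-level quality function), so treat this, and its names, purely as a sketch:

```python
import random

def internal_edges(edges, nodes):
    """Quality of a community: number of edges with both endpoints inside it."""
    s = set(nodes)
    return sum(1 for u, v in edges if u in s and v in s)

def community_pvalue(edges, all_nodes, community, trials=2000, seed=0):
    """Monte Carlo p-value: fraction of random node sets of the same size
    whose internal-edge count is at least that of the given community."""
    rng = random.Random(seed)
    q_obs = internal_edges(edges, community)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(all_nodes, len(community))
        if internal_edges(edges, sample) >= q_obs:
            hits += 1
    return hits / trials
```

A small p-value means the community is denser than chance would predict; a clique embedded in a sparse graph, for example, scores far better than almost every random node set of the same size.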

  6. Monitoring groundwater: optimising networks to take account of cost effectiveness, legal requirements and enforcement realities

    NASA Astrophysics Data System (ADS)

    Allan, A.; Spray, C.

    2013-12-01

    The quality of monitoring networks and modelling in environmental regulation is increasingly important. This is particularly true with respect to groundwater management, where data may be limited, physical processes poorly understood and timescales very long. The powers of regulators may be fatally undermined by poor or non-existent networks, primarily through mismatches between the legal standards that networks must meet, actual capacity and the evidentiary standards of courts. For example, in the second and third implementation reports on the Water Framework Directive, the European Commission drew attention to gaps in the standards of mandatory monitoring networks, where the standard did not meet the reality. In that context, groundwater monitoring networks should provide a reliable picture of groundwater levels and a 'coherent and comprehensive' overview of chemical status so that anthropogenically influenced long-term upward trends in pollutant levels can be tracked. Confidence in this overview should be such that 'the uncertainty from the monitoring process should not add significantly to the uncertainty of controlling the risk', with densities being sufficient to allow assessment of the impact of abstractions and discharges on levels in groundwater bodies at risk. The fact that the legal requirements for the quality of monitoring networks are set out in very vague terms highlights the many variables that can influence the design of monitoring networks. However, the quality of a monitoring network as part of the armoury of environmental regulators is potentially of crucial importance. If, as part of enforcement proceedings, a regulator takes an offender to court and relies on conclusions derived from monitoring networks, a defendant may be entitled to question those conclusions. If the credibility, reliability or relevance of a monitoring network can be undermined, because it is too sparse, for example, this could have dramatic consequences for the ability of a regulator to ensure compliance with legal standards. On the other hand, it can be ruinously expensive to set up a monitoring network in remote areas, and regulators must therefore balance the cost effectiveness of these networks against the chance that a court might question their fitness for purpose. This presentation will examine how regulators can balance legal standards for monitoring against the cost of developing and maintaining the requisite networks, while still producing observable improvements in water and ecosystem quality backed by legally enforceable sanctions for breaches. Reflecting the findings from the EU-funded GENESIS project, it will look at case law from around the world to assess how tribunals balance competing models, and the extent to which decisions may be revisited in the light of new scientific understanding. Finally, it will make recommendations to assist regulators in optimising their network designs for enforcement.

  7. Resource modelling for control: how hydrogeological modelling can support a water quality monitoring infrastructure

    NASA Astrophysics Data System (ADS)

    Scozzari, Andrea; Doveri, Marco

    2015-04-01

    Knowledge of the physical and chemical processes implied in the exploitation of water bodies for human consumption is an essential tool for the optimisation of the monitoring infrastructure. Because of their increasing importance in the context of human consumption (at least in the EU), this work focuses on groundwater resources. In the framework of drinking water networks, physical and data-driven modelling of transport phenomena in groundwater can help to optimise the sensor network and to validate the acquired data. This work proposes the combined use of physical and data-driven modelling as a support to the design of, and the maximisation of results from, a network of distributed sensors. In particular, the validation of physico-chemical measurements and the detection of possible anomalies by a set of continuous measurements benefit from knowledge of the domain from which water is abstracted and its expected characteristics. Change-detection techniques based on non-specific sensors (addressed by quite a large literature over the last two decades) have to deal with the classical issues of maximising correct detections and minimising false alarms, the latter being the most typical problem to be faced when designing truly applicable monitoring systems. In this context, the definition of an "anomaly" in terms of distance from an expected value or feature characterising the quality of water implies the definition of a suitable metric and knowledge of the physical and chemical peculiarities of the natural domain from which water is exploited, with its implications in terms of characteristics of the water resource.
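The "distance from an expected value" notion can be made concrete with a minimal standardised-distance check. The threshold k and the Gaussian-style metric are illustrative assumptions, not the authors' metric:

```python
def is_anomaly(reading, expected, std, k=3.0):
    """Flag a water-quality reading whose standardised distance from the
    expected value exceeds k standard deviations (a simple distance metric).
    `expected` and `std` would come from the hydrogeological model of the
    abstraction domain."""
    return abs(reading - expected) / std > k
```

In practice the expected value and its spread would be supplied by the physical/data-driven model of the groundwater body, which is precisely how the modelling supports false-alarm reduction.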

  8. Ultra-High Capacity Networking Enabled By Optical Technologies

    DTIC Science & Technology

    2003-08-01

    interface that raises an interrupt regardless of what the kernel may be doing. Piglet also uses fast wakeup signals to optimise scheduling in certain...executed which switches context directly to the now-runnable application, i.e., without invoking the scheduler. Since the fast wakeup handler does not...and scheduling, the first one has been built and tested. 10 Gb/s data patterns have been generated with this board. A systems experiment has

  9. Using heuristic algorithms for capacity leasing and task allocation issues in telecommunication networks under fuzzy quality of service constraints

    NASA Astrophysics Data System (ADS)

    Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin

    2014-03-01

    Nowadays, every firm uses telecommunication networks, in different amounts and ways, in order to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service needed to accomplish the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures that are capable of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.
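A flavour of the heuristic route can be given with a classic first-fit-decreasing shelf heuristic for 2-D bin packing. This is a generic sketch, not one of the two procedures proposed in the article, and it ignores the fuzzy QoS constraints; names are illustrative:

```python
def shelf_pack(rects, bin_w, bin_h):
    """First-fit-decreasing shelf heuristic for 2-D bin packing.
    rects: list of (width, height); returns the number of bins used.
    Rectangles are placed largest-area first, onto the first shelf that
    fits, opening new shelves and bins as needed."""
    bins = []  # each bin: {"shelves": [{"used": w, "h": h}, ...], "height": used_height}
    for w, h in sorted(rects, key=lambda r: r[0] * r[1], reverse=True):
        placed = False
        for b in bins:
            for shelf in b["shelves"]:           # try an existing shelf
                if shelf["used"] + w <= bin_w and h <= shelf["h"]:
                    shelf["used"] += w
                    placed = True
                    break
            if placed:
                break
            if b["height"] + h <= bin_h:         # open a new shelf in this bin
                b["shelves"].append({"used": w, "h": h})
                b["height"] += h
                placed = True
                break
        if not placed:                           # open a new bin
            bins.append({"shelves": [{"used": w, "h": h}], "height": h})
    return len(bins)
```

In the capacity-leasing reading, one axis of each rectangle would correspond to capacity and the other to time, with bins standing for leased contracts.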

  10. Path scheduling for multiple mobile actors in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Trapasiya, Samir D.; Soni, Himanshu B.

    2017-05-01

    In wireless sensor networks (WSNs), energy is the main constraint. In this work we have addressed this issue for single as well as multiple mobile actor networks. We propose a Rendezvous Point Selection Scheme (RPSS) in which Rendezvous Nodes are selected by a set covering problem approach and, from these, Rendezvous Points are selected in a way that reduces the tour length. The mobile actor's tour is scheduled to pass through those Rendezvous Points as in the Travelling Salesman Problem (TSP). We have also proposed a novel rendezvous node rotation scheme for fair utilisation of all the nodes. We have compared RPSS with the Stationary Actor scheme as well as RD-VT, RD-VT-SMT and WRP-SMT on performance metrics such as energy consumption, network lifetime and route length, and found better outcomes in all cases for a single actor. We have also applied RPSS to multiple mobile actor cases, namely Multi-Actor Single Depot (MASD) termination and Multi-Actor Multiple Depot (MAMD) termination, and observed through extensive simulation that MAMD saves network energy in an optimised way and enhances network lifetime compared to all other schemes.
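The TSP-style tour through the Rendezvous Points can be approximated with a simple nearest-neighbour heuristic. This sketch (names and coordinates are illustrative) is not the RPSS itself, only the tour-scheduling step it relies on:

```python
import math

def nn_tour(points):
    """Nearest-neighbour TSP heuristic: starting from points[0] (the depot),
    repeatedly visit the closest unvisited rendezvous point, then return.
    Returns the tour as a list of indices into `points`."""
    unvisited = list(range(1, len(points)))
    tour, cur = [0], 0
    while unvisited:
        nxt = min(unvisited, key=lambda i: math.dist(points[cur], points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    tour.append(0)  # close the tour at the depot
    return tour
```

Reducing the tour length directly reduces the actor's travel energy, which is why the rendezvous points are chosen to keep this tour short.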

  11. Identification of Text and Symbols on a Liquid Crystal Display Part 2: Contrast and Luminance Settings to Optimise Legibility

    DTIC Science & Technology

    2009-02-01

    Measurements on Chart Design and Scoring Rule. Optometry and Vision Science, 79(12), 768-792. ISO. (1998). EN ISO 9241-11. Ergonomic Requirements for...Human Factors from the University of Queensland. He began his career designing and building computerised electronics for the theatre. Following this...to optical detection. Recent work includes the assessment of networked naval gunfire support, ergonomic assessments of combat system consoles and

  12. Allergy medical care network: a new model of care for specialties.

    PubMed

    Ferré-Ybarz, L; Salinas Argente, R; Nevot Falcó, S; Gómez Galán, C; Franquesa Rabat, J; Trapé Pujol, J; Oliveras Alsina, P; Pons Serra, M; Corbella Virós, X

    2015-01-01

    In 2005 the Althaia Foundation Allergy Department performed its daily activity in the Hospital Sant Joan de Deu of Manresa. Given the increasing demand for allergy care, the department's performance was analysed and a strategic plan (SP) for 2005-2010 was designed. The main objective of the study was to assess the impact of the application of the SP on the department's operations and organisational level in terms of profitability, productivity and quality of care. Descriptive, retrospective study which evaluated the operation of the allergy department. The baseline situation was analysed and the SP was designed. Indicators were set to perform a comparative analysis after application of the SP. The indicators showed an increase in medical care activity (first visits, 34%; successive visits, 29%; day hospital treatments, 51%), high rates of resolution and reduced waiting lists. Economic analysis indicated an increase in direct costs justified by the increased activity and territory attended. Cost optimisation was explained by improved patient accessibility, minimised absenteeism in the workplace and an improved cost per visit. After application of the SP, a networking system was established for the allergy speciality that has expanded the territory for which it provides care, increased total activity and the capacity to resolve patient cases, optimised human resources, improved quality of care and streamlined medical costs. Copyright © 2013 SEICAP. Published by Elsevier España. All rights reserved.

  13. Freeze-dried, mucoadhesive system for vaginal delivery of the HIV microbicide, dapivirine: optimisation by an artificial neural network.

    PubMed

    Woolfson, A David; Umrethia, Manish L; Kett, Victoria L; Malcolm, R Karl

    2010-03-30

Dapivirine mucoadhesive gels and freeze-dried tablets were prepared using a 3x3x2 factorial design. An artificial neural network (ANN) with multi-layer perceptron was used to investigate the effect of hydroxypropyl-methylcellulose (HPMC): polyvinylpyrrolidone (PVP) ratio (X1), mucoadhesive concentration (X2) and delivery system (gel or freeze-dried mucoadhesive tablet, X3) on the response variables: cumulative release of dapivirine at 24h (Q(24)), mucoadhesive force (F(max)) and zero-rate viscosity. Optimisation was performed by minimising the error between the experimental and predicted values of the responses by ANN. The method was validated using check point analysis by preparing six formulations of gels and their corresponding freeze-dried tablets randomly selected from within the design space of contour plots. Experimental and predicted values of the response variables were not significantly different (p>0.05, two-sided paired t-test). For gels, Q(24) values were higher than those of their corresponding freeze-dried tablets. F(max) values for freeze-dried tablets were significantly different (2-4 times greater, p>0.05, two-sided paired t-test) compared to equivalent gels. Freeze-dried tablets having lower values for X1 and higher values for X2 components offered the best compromise between effective dapivirine release, mucoadhesion and viscosity, such that increased vaginal residence time was likely to be achieved. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  14. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

New South Wales (NSW), Australia has a network of multirole, retrieval-physician-staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction whose population is concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimisation model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold, or minimising the overall average response time to all persons, both in green-field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min when optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached all of the population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimises response time for a given number of bases and a minimum defined threshold of population coverage. The addition of specialised rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
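The coverage objective described above can be illustrated with a greedy sketch of the maximal covering location problem (MCLP); the sites, areas and populations below are hypothetical stand-ins for the census data, and a real study would use exact integer programming rather than this greedy heuristic.

```python
# Greedy sketch of the maximal covering location problem (MCLP):
# pick bases one at a time, each time choosing the candidate site
# that covers the most not-yet-covered population within the
# response-time threshold.

def greedy_mclp(population, covers, n_bases):
    """population: {area: people}; covers: {site: set of areas reachable}."""
    chosen, covered = [], set()
    for _ in range(n_bases):
        best_site, best_gain = None, 0
        for site, areas in covers.items():
            if site in chosen:
                continue
            gain = sum(population[a] for a in areas - covered)
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:       # no remaining site adds coverage
            break
        chosen.append(best_site)
        covered |= covers[best_site]
    pct = sum(population[a] for a in covered) / sum(population.values())
    return chosen, pct

# Hypothetical census areas (people) and candidate HEMS sites with
# the areas each could reach within the 45 min threshold.
population = {"A": 500, "B": 800, "C": 300, "D": 200}
covers = {"s1": {"A", "B"}, "s2": {"B", "C"}, "s3": {"C", "D"}}
bases, coverage = greedy_mclp(population, covers, 2)
```

The greedy rule first takes the site with the largest raw coverage, then the site adding the most new people, mirroring how marginal coverage gains shrink as bases are added.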

  15. Multistrategy Self-Organizing Map Learning for Classification Problems

    PubMed Central

    Hasan, S.; Shamsuddin, S. M.

    2011-01-01

Multistrategy learning combining the Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain due to its capability to handle complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and being trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons against existing SOM networks and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and quantisation errors than the other methods, supported by significance tests. PMID:21876686
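The PSO step that such hybrids rely on can be sketched with a minimal global-best PSO; the sphere objective below stands in for the SOM weight-quality measure, and all parameters are illustrative, not the paper's ESOMPSO settings.

```python
import random

# Minimal global-best PSO sketch (illustrative, not the paper's
# ESOMPSO): particles minimise a toy objective standing in for a
# SOM weight-quality error.

def pso(f, dim, n=20, iters=100, seed=1):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=2)
```

The inertia and acceleration constants (0.7, 1.5, 1.5) lie in the standard convergent region for PSO; in the hybrid setting, `f` would score a candidate SOM weight vector instead of the sphere function.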

  16. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing of computing, storage and networking resources among different research areas (the largest resources being those composing the Milan WLCG Tier-2 centre, tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options is available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also fits well with the objectives listed in the European Horizon 2020 framework for research and development.

  17. On the performance of energy detection-based CR with SC diversity over IG channel

    NASA Astrophysics Data System (ADS)

    Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka

    2017-12-01

Cognitive radio (CR) is a viable 5G technology to address the scarcity of spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the energy detection-based spectrum sensing technique in CR networks over an inverse Gaussian channel with the selection combining diversity technique is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
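The error-probability threshold optimisation can be illustrated under a simple Gaussian approximation of the detector statistic; the means and standard deviations below are hypothetical and do not model the inverse Gaussian channel itself.

```python
import math

# Illustrative threshold optimisation for an energy detector under a
# Gaussian approximation of the test statistic (hypothetical values,
# not the paper's IG-channel model).

def q_func(x):          # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

m0, s0 = 1.0, 0.2       # statistic mean/std under H0 (noise only)
m1, s1 = 1.5, 0.3       # statistic mean/std under H1 (signal present)

def prob_error(lam):    # equal priors: Pe = (Pfa + Pmd) / 2
    pfa = q_func((lam - m0) / s0)          # false alarm
    pmd = 1.0 - q_func((lam - m1) / s1)    # missed detection
    return 0.5 * (pfa + pmd)

# Grid search for the threshold minimising the probability of error.
grid = [0.5 + 0.001 * k for k in range(1500)]
lam_opt = min(grid, key=prob_error)
pe_opt = prob_error(lam_opt)
```

The optimal threshold lands strictly between the two conditional means, trading false alarms against missed detections, which is the balance the paper strikes per diversity branch.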

  18. An exploration of alternative visualisations of the basic helix-loop-helix protein interaction network

    PubMed Central

    Holden, Brian J; Pinney, John W; Lovell, Simon C; Amoutzias, Grigoris D; Robertson, David L

    2007-01-01

    Background Alternative representations of biochemical networks emphasise different aspects of the data and contribute to the understanding of complex biological systems. In this study we present a variety of automated methods for visualisation of a protein-protein interaction network, using the basic helix-loop-helix (bHLH) family of transcription factors as an example. Results Network representations that arrange nodes (proteins) according to either continuous or discrete information are investigated, revealing the existence of protein sub-families and the retention of interactions following gene duplication events. Methods of network visualisation in conjunction with a phylogenetic tree are presented, highlighting the evolutionary relationships between proteins, and clarifying the context of network hubs and interaction clusters. Finally, an optimisation technique is used to create a three-dimensional layout of the phylogenetic tree upon which the protein-protein interactions may be projected. Conclusion We show that by incorporating secondary genomic, functional or phylogenetic information into network visualisation, it is possible to move beyond simple layout algorithms based on network topology towards more biologically meaningful representations. These new visualisations can give structure to complex networks and will greatly help in interpreting their evolutionary origins and functional implications. Three open source software packages (InterView, TVi and OptiMage) implementing our methods are available. PMID:17683601

  19. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
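A serial sketch of the underlying (mu + lambda) evolution strategy is shown below; the sphere objective and population sizes are toy stand-ins for the gap gene fitting problem, and the paper's actual contribution (asynchronous island parallelism) is omitted.

```python
import random

# Serial sketch of a (mu + lambda) evolution strategy, the kind of
# search the paper parallelises across islands. The objective and
# sizes are illustrative, not the gap gene model.

def es_optimise(f, dim, mu=5, lam=20, iters=60, sigma=0.5, seed=2):
    rnd = random.Random(seed)
    parents = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(iters):
        offspring = []
        for _ in range(lam):
            base = rnd.choice(parents)
            # Gaussian mutation with fixed step size sigma
            offspring.append([x + rnd.gauss(0, sigma) for x in base])
        pool = parents + offspring          # "+" selection keeps elites
        pool.sort(key=f)
        parents = pool[:mu]
    return parents[0], f(parents[0])

sphere = lambda x: sum(v * v for v in x)
best, best_val = es_optimise(sphere, dim=3)
```

Because the "+" selection is elitist, the best fitness is monotonically non-increasing; in an asynchronous island version, each node would run this loop locally and exchange elites without a global barrier.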

  20. Impact of New Water Sources on the Overall Water Network: An Optimisation Approach

    PubMed Central

    Jones, Brian C.; Hove-Musekwa, Senelani D.

    2014-01-01

A mathematical programming problem is formulated for a water network with new water sources included. Salinity and water hardness are considered in the model, which is then solved using the Max-Min Ant System (MMAS) to assess the impact of new water sources on the total cost of the existing network. It is efficient to include new water sources if the distances to them are short or if there is a high penalty associated with failure to meet demand. Desalination unit costs also significantly affect the decision whether to install new water sources into the existing network, while softening costs are generally negligible in making such decisions. Experimental results show that, in the example considered, it is efficient to reduce the number of desalination plants to a single central plant. The Max-Min Ant System algorithm appears to be an effective method, as shown by its lower computational time compared to the commercial solver Cplex. PMID:27382617
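A minimal Max-Min Ant System can be sketched on a toy source-to-sink graph; the bounded pheromone range and iteration-best deposit are the defining MMAS ingredients, while the graph and parameters below are illustrative, not the paper's water network model.

```python
import random

# Minimal Max-Min Ant System (MMAS) sketch on a toy graph: pheromone
# is clamped to [TAU_MIN, TAU_MAX] and only the iteration-best ant
# deposits. Costs and parameters are illustrative.

graph = {0: {1: 1.0, 2: 2.0, 3: 5.0},   # edge costs from each node
         1: {3: 1.0},
         2: {3: 2.0}}
SOURCE, SINK = 0, 3
TAU_MIN, TAU_MAX, RHO = 0.1, 5.0, 0.5

tau = {(u, v): TAU_MAX for u in graph for v in graph[u]}
rnd = random.Random(3)

def build_path():
    node, path, cost = SOURCE, [SOURCE], 0.0
    while node != SINK:
        nxt = list(graph[node])
        # choice probability ~ pheromone / edge cost
        weights = [tau[(node, v)] / graph[node][v] for v in nxt]
        node = rnd.choices(nxt, weights=weights)[0]
        cost += graph[path[-1]][node]
        path.append(node)
    return path, cost

best_path, best_cost = None, float("inf")
for _ in range(30):
    it_best, it_cost = min((build_path() for _ in range(20)),
                           key=lambda pc: pc[1])
    if it_cost < best_cost:
        best_path, best_cost = it_best, it_cost
    for e in tau:                         # evaporate, clamp below
        tau[e] = max(TAU_MIN, (1 - RHO) * tau[e])
    for u, v in zip(it_best, it_best[1:]):  # deposit, clamp above
        tau[(u, v)] = min(TAU_MAX, tau[(u, v)] + 1.0 / it_cost)
```

The pheromone bounds are what distinguish MMAS from plain ant system: they prevent premature convergence onto one path, which matters when the "paths" encode network design choices such as where to site plants.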

  1. Mapping of multiple parameter m-health scenarios to mobile WiMAX QoS variables.

    PubMed

    Alinejad, Ali; Philip, N; Istepanian, R S H

    2011-01-01

Multiparameter m-health scenarios with bandwidth-demanding requirements will be among the key applications in future 4G mobile communication systems. These applications will potentially require specific spectrum allocations with higher quality of service (QoS) requirements. Furthermore, one of the key 4G technologies targeting m-health will be medical applications based on WiMAX systems. Hence, it is timely to evaluate such multiple-parameter m-health scenarios over mobile WiMAX networks. In this paper, we address the preliminary performance analysis of a mobile WiMAX network for multiparametric telemedical scenarios. In particular, we map the medical QoS to typical WiMAX QoS parameters to optimise the performance of these parameters in a typical m-health scenario. Preliminary performance analyses of the proposed multiparametric scenarios are evaluated to provide essential information on future medical QoS requirements and constraints in these telemedical network environments.

  2. Asynchronous sampled-data approach for event-triggered systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Memon, Azhar M.

    2017-11-01

While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges such as the coupling between control and event-condition design, the optimisation of available resources (control, communication and computation power), and time delays due to computation and the communication network. With this motivation, the paper presents separate designs of the control and event-triggering mechanisms, thus simplifying the overall analysis; an asynchronous linear quadratic Gaussian controller that tackles delays and the aperiodic nature of transmissions; and a novel event mechanism that compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control, and the results show significant improvement over the periodic counterpart.

  3. A big-data model for multi-modal public transportation with application to macroscopic control and optimisation

    NASA Astrophysics Data System (ADS)

    Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert

    2015-11-01

    This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is the ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using the data extracted from a traffic simulator. A realistic test-case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle big quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potentials of the approach.
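The core of such a model, a transition matrix whose stationary distribution indicates long-run relative loads, can be sketched with a two-state chain; the probabilities below are hand-picked for illustration, not London data.

```python
# Toy sketch of the Markov-chain view of a transport network: states
# are stations/modes, and the stationary distribution of the
# transition matrix indicates long-run relative loads. Hand-picked
# two-state example.

P = [[0.9, 0.1],
     [0.5, 0.5]]

def stationary(P, iters=200):
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):          # power iteration: pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

pi = stationary(P)
# For this chain the exact stationary distribution is (5/6, 1/6).
```

At city scale the same computation is done on a huge sparse matrix estimated from trip data, which is where the "big data" aspect enters; the control question then becomes how perturbing entries of `P` shifts the stationary loads.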

  4. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm, rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718

  5. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm, rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
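The nondominated-solution guidance can be illustrated with a small Pareto filter for minimisation problems; the points below are illustrative objective vectors, not output of the algorithm.

```python
# Sketch of extracting the nondominated (Pareto) set used to guide a
# swarm, assuming minimisation of every objective.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
front = nondominated(pts)   # (3,3) and (5,5) are dominated by (2,2)
```

In the improved algorithm, a particle's social attractor is drawn from this set rather than from a single other swarm's best, which spreads the search along the whole front.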

  6. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to the PSO-optimised controller or any non-optimised backstepping controller.

  7. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    PubMed

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are its simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with results from the actual fault diagnosis, an existing diagnosis method and ANN alone. A comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of the transformer fault type than the existing diagnosis method and previously reported works.

  8. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    PubMed Central

    2015-01-01

It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are its simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with results from the actual fault diagnosis, an existing diagnosis method and ANN alone. A comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of the transformer fault type than the existing diagnosis method and previously reported works. PMID:26103634

  9. Understanding the Models of Community Hospital rehabilitation Activity (MoCHA): a mixed-methods study

    PubMed Central

    Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen

    2017-01-01

Introduction To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care, and the potential impact if community hospital rehabilitation were optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis Four linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to datasets from the National Health Service (NHS) Benchmarking Network surveys of community hospitals and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally were optimised to the best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of three community hospitals, two high and one low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and of the gap between local and best community hospital performance. Ethics and dissemination Publications will be in peer-reviewed journals; reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766

  10. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    PubMed

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process, with standard features already developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features, the Gaussian Mixture Model (GMM) and, finally, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches to ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results of the study showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to an accuracy of only 95.00% for SA-ELM LID.

  11. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    PubMed Central

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process, with standard features already developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features, the Gaussian Mixture Model (GMM) and, finally, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches to ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results of the study showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to an accuracy of only 95.00% for SA-ELM LID. PMID:29672546
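One of the selection schemes named above, K-tournament selection, can be sketched as follows; the population and fitness values are toy data (higher fitness taken as better), not SA-ELM internals.

```python
import random

# Sketch of K-tournament selection, one of the two selection schemes
# the paper combines (with Split-Ratio) in the SA-ELM optimisation
# loop. Toy population; higher fitness is better here.

def k_tournament(population, fitness, k, rnd):
    """Pick the fittest of k uniformly sampled distinct candidates."""
    contenders = rnd.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitness[i])]

population = ["w1", "w2", "w3", "w4"]   # e.g. candidate weight sets
fitness = [0.2, 0.9, 0.5, 0.7]
rnd = random.Random(4)
winner = k_tournament(population, fitness, k=len(population), rnd=rnd)
# With k equal to the population size, the tournament is exhaustive,
# so the fittest individual always wins.
```

Smaller `k` lowers selection pressure and preserves diversity; `k` equal to the population size degenerates to pure elitism, which is the boundary case the example exercises.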

  12. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    PubMed

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

In human sperm motility analysis, sperm segmentation plays an important role in determining the location of multiple sperm cells. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimisation, where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method achieved rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperm cells. The proposed algorithm is expected to be applied in sperm motility analysis because of its robustness and capability.
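The mutual information used as the fitness function can be sketched for discrete label sequences (e.g. a candidate segmentation versus a reference); this is a generic MI computation in nats, not the paper's exact feature definition.

```python
import math

# Sketch of mutual information between two discrete label sequences,
# the kind of quantity usable as a segmentation fitness function.
# Generic MI in nats; illustrative, not the paper's feature MI.

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = {}, {}, {}
    for x, y in zip(xs, ys):
        px[x] = px.get(x, 0) + 1          # marginal counts of X
        py[y] = py.get(y, 0) + 1          # marginal counts of Y
        pxy[(x, y)] = pxy.get((x, y), 0) + 1   # joint counts
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

identical = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
independent = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
# identical labellings give MI = ln 2 (the label entropy);
# the independent pairing gives MI = 0.
```

As a fitness function, higher MI between a candidate segmentation and the image features rewards label maps that actually explain the data, which is what PSO maximises when tuning the ICM parameters.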

  13. Design and optimisation of novel configurations of stormwater constructed wetlands

    NASA Astrophysics Data System (ADS)

    Kiiza, Christopher

    2017-04-01

Constructed wetlands (CWs) are recognised as a cost-effective technology for wastewater treatment. CWs have been deployed, and could be retrofitted into existing urban drainage systems, to prevent surface water pollution, attenuate floods and act as sources of reusable water. However, there exist numerous criteria for the design configuration and operation of CWs. The aim of the study was to examine the effects of design and operational variables on the performance of CWs. To achieve this, 8 novel designs of vertical flow CWs were continuously operated and monitored weekly for 2 years. Pollutant removal efficiency in each CW unit was evaluated from physico-chemical analyses of influent and effluent water samples. Hybrid optimised multi-layer perceptron artificial neural networks (MLP ANNs) were applied to simulate treatment efficiency in the CWs. Subsequently, predictive and analytical models were developed for each design unit. Results show the models have sound generalisation abilities, with various design configurations and operational variables influencing the performance of CWs. Although some design configurations attained faster and higher removal efficiencies than others, all 8 CW designs produced effluents permissible for discharge into watercourses with strict regulatory standards.

  14. Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution

    NASA Astrophysics Data System (ADS)

    Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.

    2017-08-01

    Measurements and sensing implementations impose certain costs in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking the states of a dynamical system for estimation purposes, where each sensor incurs a different cost to measure each (realisable) state. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements must ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial expression in the size of the algorithm's input. We frame the problem as a linear sum assignment with polynomial solution complexity.
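    The linear sum assignment framing can be illustrated directly. The sketch below uses exhaustive enumeration over a tiny hypothetical cost matrix; the paper's contribution is reaching the same optimum in polynomial time via a graph-theoretic method, which this O(n!) brute-force check does not reproduce and which is only viable here because the instance is tiny.

```python
from itertools import permutations

# cost[i][j]: cost for sensor i to measure state j (illustrative numbers).
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def min_cost_assignment(cost):
    """Exhaustive linear sum assignment: each sensor measures exactly
    one state so that the total measurement cost is minimal."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

assignment, total = min_cost_assignment(cost)
# assignment[i] is the state assigned to sensor i; here sensor 0 → state 1,
# sensor 1 → state 0, sensor 2 → state 2, with total cost 1 + 2 + 2 = 5.
```

    A polynomial-time solver (e.g. the Hungarian method) would return the same assignment without enumerating all permutations.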

  15. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chipset. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps, with over 2100 full SSL handshakes per second, at a clock rate of 95 MHz.

  16. Advanced Distribution Network Modelling with Distributed Energy Resources

    NASA Astrophysics Data System (ADS)

    O'Connell, Alison

    The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. 
Building upon the rolling optimisation, a three-phase optimal power flow method is developed. The formulation has the capability to provide optimal solutions for distribution system control variables, for a chosen objective function, subject to required constraints. It can, therefore, be utilised for numerous technologies and applications. The three-phase optimal power flow is employed to manage various distributed resources, such as photovoltaics and storage, as well as distribution equipment, including tap changers and switches. The flexibility of the methodology allows it to be applied in both an operational and a planning capacity. The three-phase optimal power flow is employed in an operational planning capacity to determine volt-var curves for distributed photovoltaic inverters. The formulation finds optimal reactive power settings for a number of load and solar scenarios and uses these reactive power points to create volt-var curves. Volt-var curves are determined for 10 PV systems on a test feeder. A universal curve is also determined which is applicable to all inverters. The curves are validated by testing them in a power flow setting over a 24-hour test period. The curves are shown to provide advantages to the feeder in terms of reduction of voltage deviations and unbalance, with the individual curves proving to be more effective. It is also shown that adding a new PV system to the feeder only requires analysis for that system. In order to represent the uncertainties that inherently occur on distribution systems, an information gap decision theory method is also proposed and integrated into the three-phase optimal power flow formulation. This allows for robust network decisions to be made using only an initial prediction for what the uncertain parameter will be. The work determines tap and switch settings for a test network with demand being treated as uncertain. The aim is to keep losses below a predefined acceptable value. 
The results provide the decision maker with the maximum possible variation in demand for a given acceptable variation in the losses. A validation is performed with the resulting tap and switch settings being implemented, and shows that the control decisions provided by the formulation keep losses below the acceptable value while adhering to the limits imposed by the network.
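    The core of the charging-cost minimisation above can be illustrated with a deliberately simplified sketch: with a known price profile and no network constraints, filling the cheapest hours first is optimal. The tariff numbers are hypothetical, and the thesis's three-phase network sensitivities, rolling horizon and uncertainty handling are all omitted.

```python
def cheapest_charge_schedule(prices, energy_needed, max_rate):
    """Schedule EV charging over a fixed horizon at minimum cost.

    With no network constraints, greedily filling the cheapest hours
    first is optimal for this relaxation; the thesis additionally
    enforces three-phase network limits, which this sketch omits.
    prices: price per kWh for each hour; energy_needed: kWh;
    max_rate: maximum charge per hour (kWh/h).
    """
    schedule = [0.0] * len(prices)
    remaining = energy_needed
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if remaining <= 0:
            break
        charge = min(max_rate, remaining)   # fill this cheap hour
        schedule[hour] = charge
        remaining -= charge
    cost = sum(schedule[h] * prices[h] for h in range(len(prices)))
    return schedule, cost

# Illustrative overnight-cheap tariff over 24 hours.
prices = [0.10] * 7 + [0.25] * 16 + [0.10]
schedule, cost = cheapest_charge_schedule(prices, energy_needed=20, max_rate=7)
# All 20 kWh land in 0.10/kWh hours, so the total cost is 2.0.
```

    The rolling formulation in the thesis would re-solve a problem of this shape at each time step as updated price and demand forecasts arrive.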

  17. Understanding the role of hydrogen bonding in the aggregation of fumed silica particles in triglyceride solvents.

    PubMed

    Whitby, Catherine P; Krebsz, Melinda; Booty, Samuel J

    2018-10-01

    Fumed silica particles are thought to thicken organic solvents into gels by aggregating to form networks. Hydrogen bonding between silanol groups on different particle surfaces causes the aggregation. The gel structure, and hence the flow behaviour, is altered by varying the proportion of silanol groups on the particle surfaces. However, characterising the gel using rheology measurements alone is not sufficient to optimise the aggregation. We have used confocal microscopy to characterise the changes in the network microstructure caused by altering the particle surface chemistry. Organogels were formed by dispersing fumed silica nanoparticles in a triglyceride solvent. The particle surface chemistry was systematically varied from oleophobic to oleophilic by functionalisation with hydrocarbons. We directly visualised the particle networks using confocal scanning laser microscopy and investigated the correlations between the network structure and the shear response of the organogels. Our key finding is that the sizes of the pore spaces in the networks depend on the fraction of silanol groups available to form hydrogen bonds. The reduction in the network elasticity of gels formed by methylated particles can be accounted for by the increasing pore size and tenuous nature of the networks. This is the first report that characterises the changes in the microstructure of fumed silica particle networks in non-polar solvents caused by manipulating the particle surface chemistry. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.

    PubMed

    Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas

    2018-02-01

    There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which is however contradictory to accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band specific functional networks measured from magnetoencephalography (MEG), are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were able to identify changes in network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results in a similar manner as for single layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons. 
Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Focal psychodynamic therapy, cognitive behaviour therapy, and optimised treatment as usual in outpatients with anorexia nervosa (ANTOP study): randomised controlled trial.

    PubMed

    Zipfel, Stephan; Wild, Beate; Groß, Gaby; Friederich, Hans-Christoph; Teufel, Martin; Schellberg, Dieter; Giel, Katrin E; de Zwaan, Martina; Dinkel, Andreas; Herpertz, Stephan; Burgmer, Markus; Löwe, Bernd; Tagay, Sefik; von Wietersheim, Jörn; Zeeck, Almut; Schade-Brittinger, Carmen; Schauenburg, Henning; Herzog, Wolfgang

    2014-01-11

    Psychotherapy is the treatment of choice for patients with anorexia nervosa, although evidence of efficacy is weak. The Anorexia Nervosa Treatment of OutPatients (ANTOP) study aimed to assess the efficacy and safety of two manual-based outpatient treatments for anorexia nervosa--focal psychodynamic therapy and enhanced cognitive behaviour therapy--versus optimised treatment as usual. The ANTOP study is a multicentre, randomised controlled efficacy trial in adults with anorexia nervosa. We recruited patients from ten university hospitals in Germany. Participants were randomly allocated to 10 months of treatment with either focal psychodynamic therapy, enhanced cognitive behaviour therapy, or optimised treatment as usual (including outpatient psychotherapy and structured care from a family doctor). The primary outcome was weight gain, measured as increased body-mass index (BMI) at the end of treatment. A key secondary outcome was rate of recovery (based on a combination of weight gain and eating disorder-specific psychopathology). Analysis was by intention to treat. This trial is registered at http://isrctn.org, number ISRCTN72809357. Of 727 adults screened for inclusion, 242 underwent randomisation: 80 to focal psychodynamic therapy, 80 to enhanced cognitive behaviour therapy, and 82 to optimised treatment as usual. At the end of treatment, 54 patients (22%) were lost to follow-up, and at 12-month follow-up a total of 73 (30%) had dropped out. 
    At the end of treatment, BMI had increased in all study groups (focal psychodynamic therapy 0·73 kg/m², enhanced cognitive behaviour therapy 0·93 kg/m², optimised treatment as usual 0·69 kg/m²); no differences were noted between groups (mean difference between focal psychodynamic therapy and enhanced cognitive behaviour therapy -0·45, 95% CI -0·96 to 0·07; focal psychodynamic therapy vs optimised treatment as usual -0·14, -0·68 to 0·39; enhanced cognitive behaviour therapy vs optimised treatment as usual -0·30, -0·22 to 0·83). At 12-month follow-up, the mean gain in BMI had risen further (1·64 kg/m², 1·30 kg/m², and 1·22 kg/m², respectively), but no differences between groups were recorded (0·10, -0·56 to 0·76; 0·25, -0·45 to 0·95; 0·15, -0·54 to 0·83, respectively). No serious adverse events attributable to weight loss or trial participation were recorded. Optimised treatment as usual, combining psychotherapy and structured care from a family doctor, should be regarded as solid baseline treatment for adult outpatients with anorexia nervosa. Focal psychodynamic therapy proved advantageous in terms of recovery at 12-month follow-up, and enhanced cognitive behaviour therapy was more effective with respect to speed of weight gain and improvements in eating disorder psychopathology. Long-term outcome data will be helpful to further adapt and improve these novel manual-based treatment approaches. German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), German Eating Disorders Diagnostic and Treatment Network (EDNET). Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of moulded parts, while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches, which have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the duration of the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage condition of the moulded parts before and after the optimisation work for both cooling channel types. A front panel housing was selected as a specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels compared to Milled Groove Square Shape (MGSS) conformal cooling channels, by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, for the straight-drilled cooling channels, melt temperature is the most significant factor contributing to the warpage condition, and warpage was reduced by 39.1% after optimisation; for the MGSS conformal cooling channels, cooling time is the most significant factor, and warpage was reduced by 38.7% after optimisation. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity for the moulded part produced.

  1. Multi-agent modelling framework for water, energy and other resource networks

    NASA Astrophysics Data System (ADS)

    Knox, S.; Selby, P. D.; Meier, P.; Harou, J. J.; Yoon, J.; Lachaut, T.; Klassert, C. J. A.; Avisse, N.; Mohamed, K.; Tomlinson, J.; Khadem, M.; Tilmant, A.; Gorelick, S.

    2015-12-01

    Bespoke modelling tools are often needed when planning future engineered interventions in the context of various climate, socio-economic and geopolitical futures. Such tools can help improve system operating policies or assess infrastructure upgrades and their risks. A frequently used approach is to simulate and/or optimise the impact of interventions in engineered systems. Modelling complex infrastructure systems can involve incorporating multiple aspects into a single model, for example physical, economic and political. This presents the challenge of effectively combining research from diverse areas into a single system. We present the Pynsim 'Python Network Simulator' framework, a library for building simulation models capable of representing the physical, institutional and economic aspects of an engineered resource system. Pynsim is an open-source, object-oriented code aiming to promote integration of different modelling processes through a single code library. We present two case studies that demonstrate important features of Pynsim's design. The first is a large interdisciplinary project on a national water system in the Middle East, with modellers from fields including water resources, economics, hydrology and geography each considering different facets of a multi-agent system. It includes: modelling water supply and demand for households and farms; a water tanker market with transfer of water between farms and households; and policy decisions made by government institutions at district, national and international levels. This study demonstrates that a well-structured library of code can provide a hub for development and act as a catalyst for integrating models. The second focuses on optimising the location of new run-of-river hydropower plants. Using a multi-objective evolutionary algorithm, this study analyses different network configurations to identify the optimal placement of new power plants within a river network. 
This demonstrates that Pynsim can be used to evaluate a multitude of topologies for identifying the optimal location of infrastructure investments. Pynsim is available on GitHub or via standard python installer packages such as pip. It comes with several examples and online documentation, making it attractive for those less experienced in software engineering.

  2. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely the genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
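    As a toy illustration of optimising a partition boundary against classification accuracy (not the authors' rough-set code), the sketch below hill-climbs a single cut point on a synthetic labelled attribute; the paper applies GA, HC and SA to the same kind of accuracy objective over rough set partitions.

```python
import random

# Synthetic stand-in for the granularisation task: choose one cut point
# on a single attribute so that the induced two-bin partition classifies
# the labelled samples as accurately as possible. Labels flip at x = 3.5.
data = ([(x / 10, 0) for x in range(0, 35)]
        + [(x / 10, 1) for x in range(35, 80)])

def accuracy(cut):
    """Fraction of samples whose bin (below/above `cut`) matches its label."""
    correct = sum(1 for x, y in data if (x >= cut) == bool(y))
    return correct / len(data)

def hill_climb(obj, start, n_iter=500, max_step=0.3, seed=1):
    """Simple hill climbing: propose a random nearby cut and accept it
    only if the objective strictly improves (the paper also applies GA
    and simulated annealing to the same accuracy objective)."""
    rng = random.Random(seed)
    cur, cur_val = start, obj(start)
    for _ in range(n_iter):
        cand = cur + rng.uniform(-max_step, max_step)
        cand_val = obj(cand)
        if cand_val > cur_val:
            cur, cur_val = cand, cand_val
    return cur, cur_val

cut, acc = hill_climb(accuracy, start=1.0)
```

    The climber moves the cut toward the true class boundary near 3.5; with multiple attributes, the search space grows and population-based methods like GA become attractive.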

  3. Human performance under two different command and control paradigms.

    PubMed

    Walker, Guy H; Stanton, Neville A; Salmon, Paul M; Jenkins, Daniel P

    2014-05-01

    The paradoxical behaviour of a new command and control concept called Network Enabled Capability (NEC) provides the motivation for this paper. In it, a traditional hierarchical command and control organisation was pitted against a network centric alternative on a common task, played thirty times by two teams. Multiple regression was used to undertake a simple form of time series analysis. It revealed that whilst the NEC condition ended up being slightly slower than its hierarchical counterpart, it was able to balance and optimise all three of the performance variables measured (task time, enemies neutralised and attrition). From this it is argued that a useful conceptual response is not to consider NEC as an end product comprised of networked computers and standard operating procedures, nor to regard the human system interaction as inherently stable, but rather to view it as a set of initial conditions from which the most adaptable component of all can be harnessed: the human. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Localisation of an Unknown Number of Land Mines Using a Network of Vapour Detectors

    PubMed Central

    Chhadé, Hiba Haj; Abdallah, Fahed; Mougharbel, Imad; Gning, Amadou; Julier, Simon; Mihaylova, Lyudmila

    2014-01-01

    We consider the problem of localising an unknown number of land mines using concentration information provided by a wireless sensor network. A number of vapour sensors/detectors, deployed in the region of interest, are able to detect the concentration of the explosive vapours, emanating from buried land mines. The collected data is communicated to a fusion centre. Using a model for the transport of the explosive chemicals in the air, we determine the unknown number of sources using a Principal Component Analysis (PCA)-based technique. We also formulate the inverse problem of determining the positions and emission rates of the land mines using concentration measurements provided by the wireless sensor network. We present a solution for this problem based on a probabilistic Bayesian technique using a Markov chain Monte Carlo sampling scheme, and we compare it to the least squares optimisation approach. Experiments conducted on simulated data show the effectiveness of the proposed approach. PMID:25384008
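    The least-squares side of the inverse problem can be sketched with a strongly simplified forward model: a single source of emission rate q producing concentration q/(1 + d²) at distance d. This model and the sensor layout are hypothetical stand-ins; the paper uses a physical vapour-transport model, handles an unknown number of sources via PCA, and compares against Bayesian MCMC sampling, none of which is reproduced here.

```python
# Hypothetical sensor layout and ground truth for the stand-in model.
sensors = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 1)]
true_src, true_q = (2.5, 1.5), 3.0

def predicted(src, q, sensor):
    """Stand-in forward model: concentration decays as q / (1 + d^2)."""
    d2 = (src[0] - sensor[0]) ** 2 + (src[1] - sensor[1]) ** 2
    return q / (1.0 + d2)

# Noise-free synthetic readings from the true source.
readings = [predicted(true_src, true_q, s) for s in sensors]

def least_squares_fit(readings, sensors, step=0.5):
    """Grid search minimising the sum of squared residuals between
    measured and model-predicted concentrations."""
    best, best_err = None, float("inf")
    qs = [q / 2 for q in range(1, 13)]              # candidate rates 0.5..6.0
    xs = [x * step for x in range(int(5 / step) + 1)]
    for sx in xs:
        for sy in xs:
            for q in qs:
                err = sum((r - predicted((sx, sy), q, s)) ** 2
                          for r, s in zip(readings, sensors))
                if err < best_err:
                    best, best_err = ((sx, sy), q), err
    return best, best_err

(src_est, q_est), err = least_squares_fit(readings, sensors)
```

    On noise-free data the grid search recovers the source exactly; with measurement noise, the Bayesian MCMC approach in the paper additionally quantifies the uncertainty of the estimate.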

  5. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A.; Jarvis, S.; Hewitt, N.

    2015-01-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end use. With respect to energy, growth has been near exponential for the last 160 years. We attempt to show that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near optimal directed networks. If so, the distribution efficiencies of these networks must decline as they expand due to path lengths becoming longer and more tortuous. To maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system: namely at the points of acquisition and end use. We postulate that the maintenance of growth at the specific rate of ~2.4% yr⁻¹ stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  6. Reviews and syntheses: guiding the evolution of the observing system for the carbon cycle through quantitative network design

    NASA Astrophysics Data System (ADS)

    Kaminski, Thomas; Rayner, Peter Julian

    2017-10-01

    Various observational data streams have been shown to provide valuable constraints on the state and evolution of the global carbon cycle. These observations have the potential to reduce uncertainties in past, current, and predicted natural and anthropogenic surface fluxes. In particular such observations provide independent information for verification of actions as requested by the Paris Agreement. It is, however, difficult to decide which variables to sample, and how, where, and when to sample them, in order to achieve an optimal use of the observational capabilities. Quantitative network design (QND) assesses the impact of a given set of existing or hypothetical observations in a modelling framework. QND has been used to optimise in situ networks and assess the benefit to be expected from planned space missions. This paper describes recent progress and highlights aspects that are not yet sufficiently addressed. It demonstrates the advantage of an integrated QND system that can simultaneously evaluate a multitude of observational data streams and assess their complementarity and redundancy.

  7. Neural networks applied to discriminate botanical origin of honeys.

    PubMed

    Anjos, Ofélia; Iglesias, Carla; Peres, Fátima; Martínez, Javier; García, Ángela; Taboada, Javier

    2015-05-15

    The aim of this work is to develop a tool based on neural networks to predict the botanical origin of honeys using physical and chemical parameters. The managed database consists of 49 honey samples of 2 different classes: monofloral (almond, holm oak, sweet chestnut, eucalyptus, orange, rosemary, lavender, strawberry trees, thyme, heather, sunflower) and multifloral. The moisture content, electrical conductivity, water activity, ash content, pH, free acidity, colorimetric coordinates in CIELAB space (L*, a*, b*) and total phenols content of the honey samples were evaluated. Those properties were considered as input variables of the predictive model. The neural network is optimised through several tests with different numbers of neurons in the hidden layer and also with different input variables. The reduced error rates (5%) allow us to conclude that the botanical origin of honey can be reliably and quickly known from the colorimetric information and the electrical conductivity of honey. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. [Optimising care structures for severe hand trauma and replantation and chances of launching a national network].

    PubMed

    Haas, E M; Volkmer, E; Holzbach, T; Wallmichrath, J; Engelhardt, T O; Giunta, R E

    2013-12-01

    Severe hand traumata have a significant impact on our health system and on insurance companies. It is estimated that 33% of all occupational injuries and 9% of all invalidity pensions are due to severe hand trauma. Unfortunately, these high numbers are not only due to the severity of the trauma but also to organisational deficiencies. Usually, the patient is treated at the general surgical emergency department in the first place and only then forwarded to a microsurgeon. This redirection increases the time required for the patient to finally reach an expert in hand surgery. On the one hand, this problem can be explained by the population's lack of awareness of distinguished experts for hand and microsurgery; on the other hand, the emergency network, or emergency doctors in particular, are not well informed about where to take a patient with a severe hand trauma - clearly a problem of communication between the hospitals and the ambulance services. It is possible to tackle this problem, but participating hand trauma centres have to work hand in hand as a network and thus exploit synergy effects. The French system "FESUM" is a good example of such a network and even comprises centres in Belgium and Switzerland. To improve the treatment of severe hand trauma, a similar alliance was recently initiated in Germany. The pilot project "Hand Trauma Alliance" (www.handverletzung.com) was started in April 2013 and currently comprises two hospitals within the region of Upper Bavaria. The network provides a hand trauma replantation service on a 24/7 basis and aims at shortening the way from the accident site to the fully qualified hand surgeon, improving the therapy of severe hand injuries and optimising acute patient care in general. 
In order to further increase the alliance's impact it is intended to extend the project's scope from regional to national coverage - nevertheless, such an endeavour can only be done in collaboration with the German Society for Hand Surgery (DGH). This article comprises 2 parts. First, the state-of-the-art of acute severe hand trauma care is summarised and explained. Subsequently, the above-mentioned pilot project is described in every detail, including positive effects but also barriers that still have to be overcome. © Georg Thieme Verlag KG Stuttgart · New York.

  9. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method for optimising the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which would exert the demanded pressure on a cylinder while being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is delivered and discussed. A possible application of the Simulated Annealing Method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further optimisation improvement is proposed.
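    A generic Metropolis-style simulated annealing loop (not the APDL implementation) can be sketched on a stand-in two-well objective; in the paper, the objective would instead score a candidate ring geometry against the demanded pressure distribution from the FEM analysis.

```python
import math
import random

def g(x):
    """Two-well stand-in objective: a local minimum near x = +1 and a
    deeper global minimum near x = -1, separated by a barrier at x ≈ 0."""
    return (x * x - 1) ** 2 + 0.3 * x

def simulated_annealing(obj, start, n_iter=5000, t0=1.0, cooling=0.999,
                        max_step=1.0, seed=2):
    """Metropolis-style annealing: downhill moves are always accepted,
    uphill moves with probability exp(-delta/T), with T decaying
    geometrically. Returns the best point ever visited."""
    rng = random.Random(seed)
    cur, cur_val = start, obj(start)
    best, best_val = cur, cur_val
    t = t0
    for _ in range(n_iter):
        cand = cur + rng.uniform(-max_step, max_step)
        cand_val = obj(cand)
        delta = cand_val - cur_val
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = cur, cur_val
        t *= cooling
    return best, best_val

# Start in the shallower basin at x = 1; annealing typically escapes
# over the barrier into the global basin near x = -1.
best_x, best_val = simulated_annealing(g, start=1.0)
```

    The early high-temperature phase is what allows escape from the local basin; a pure descent method started at x = 1 would stay there, which mirrors the convergence difficulties the paper discusses.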

  10. A mathematical programming approach for sequential clustering of dynamic networks

    NASA Astrophysics Data System (ADS)

    Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia

    2016-02-01

    A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state as well as previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes to which these students belong more accurately than by partitioning each time step individually or by partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
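    The modularity metric used to score each snapshot partition can be computed directly from an edge list. The sketch below evaluates Newman modularity for a hypothetical graph of two triangles joined by a bridge; the historical-cost term of SeqMod is not included.

```python
def modularity(edges, community):
    """Newman modularity Q of a partition: the fraction of edges inside
    communities minus its expectation under degree-preserving random
    rewiring, Q = sum_c (l_c/m - (d_c/2m)^2)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    intra, dsum = {}, {}
    for u, v in edges:                      # l_c: intra-community edges
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    for node, d in deg.items():             # d_c: total degree per community
        dsum[community[node]] = dsum.get(community[node], 0) + d
    return sum(intra.get(c, 0) / m - (dsum[c] / (2 * m)) ** 2
               for c in dsum)

# Two triangles joined by a single bridge edge; splitting them into
# their natural communities yields Q = 5/14 ≈ 0.357.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
q = modularity(edges, {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"})
```

    SeqMod maximises this quantity per snapshot while also rewarding node pairs that stay co-clustered between consecutive snapshots.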

  11. Alternative Path Communication in Wide-Scale Cluster-Tree Wireless Sensor Networks Using Inactive Periods

    PubMed Central

    Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-01-01

    The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable technology to deploy wide-scale Wireless Sensor Networks (WSNs). These networks are usually designed to support convergecast traffic, where all communication paths go through the PAN (Personal Area Network) coordinator. Nevertheless, peer-to-peer communication relationships may also be required for different types of WSN applications. That is the typical case of sensor and actuator networks, where local control loops must be closed using a reduced number of communication hops. The use of communication schemes optimised just for the support of convergecast traffic may result in higher network congestion and in a potentially higher number of communication hops. Within this context, this paper proposes an Alternative-Route Definition (ARounD) communication scheme for WSNs. The underlying idea of ARounD is to set up alternative communication paths between specific source and destination nodes, avoiding congested cluster-tree paths. These alternative paths follow shorter inter-cluster routes, using a set of intermediate nodes to relay messages during their inactive periods in the cluster-tree network. Simulation results show that the ARounD communication scheme can significantly decrease the end-to-end communication delay, when compared to the use of standard cluster-tree communication schemes. Moreover, the ARounD communication scheme is able to reduce the network congestion around the PAN coordinator, enabling a reduction in the number of message drops due to queue overflows in the cluster-tree network. PMID:28481245

  12. Alternative Path Communication in Wide-Scale Cluster-Tree Wireless Sensor Networks Using Inactive Periods.

    PubMed

    Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-05-06

    The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable technology to deploy wide-scale Wireless Sensor Networks (WSNs). These networks are usually designed to support convergecast traffic, where all communication paths go through the PAN (Personal Area Network) coordinator. Nevertheless, peer-to-peer communication relationships may also be required for different types of WSN applications. That is the typical case of sensor and actuator networks, where local control loops must be closed using a reduced number of communication hops. The use of communication schemes optimised just for the support of convergecast traffic may result in higher network congestion and in a potentially higher number of communication hops. Within this context, this paper proposes an Alternative-Route Definition (ARounD) communication scheme for WSNs. The underlying idea of ARounD is to set up alternative communication paths between specific source and destination nodes, avoiding congested cluster-tree paths. These alternative paths follow shorter inter-cluster routes, using a set of intermediate nodes to relay messages during their inactive periods in the cluster-tree network. Simulation results show that the ARounD communication scheme can significantly decrease the end-to-end communication delay, when compared to the use of standard cluster-tree communication schemes. Moreover, the ARounD communication scheme is able to reduce the network congestion around the PAN coordinator, enabling a reduction in the number of message drops due to queue overflows in the cluster-tree network.

  13. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

    Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC) offers the facility to adapt real-time video streams in response to the dynamic conditions of the multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, pre-existing streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile network environment. We propose an optimised streaming algorithm with several technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile network tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.

  14. Remediating radium contaminated legacy sites: Advances made through machine learning in routine monitoring of "hot" particles.

    PubMed

    Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike

    2015-07-15

    The extensive use of radium during the 20th century for industrial, military and pharmaceutical purposes has led to a large number of contaminated legacy sites across Europe and North America. Sites that pose a high risk to the general public can present expensive and long-term remediation projects. Often the most pragmatic remediation approach is routine monitoring with gamma-ray detectors to identify, in real time, the signal from the most hazardous heterogeneous contamination (hot particles), thus facilitating their removal and safe disposal. However, current detection systems do not fully utilise all spectral information, resulting in low detection rates and ultimately an increased risk to human health. The aim of this study was to establish an optimised detector-algorithm combination. To achieve this, field data were collected using two handheld detectors (sodium iodide and lanthanum bromide), and a number of Monte Carlo-simulated hot particles were randomly injected into the field data. This allowed the detection rates of conventional deterministic (gross counts) and machine learning (neural networks and support vector machines) algorithms to be assessed. The results demonstrated that a neural network operated on a sodium iodide detector provided the best detection capability. Compared to deterministic approaches, this optimised detection system could detect a hot particle on average 10 cm deeper into the soil column, or with half the activity at the same depth. It was also found that noise presented by internal contamination restricted lanthanum bromide for this application. Copyright © 2015. Published by Elsevier B.V.

  15. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), which provides methods that are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are essentially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, ways to distribute the optimisation process, making effective use of idle computing capacity, can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  16. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.

  17. Design of a monitoring network over France in case of a radiological accidental release

    NASA Astrophysics Data System (ADS)

    Abida, Rachid; Bocquet, Marc; Vercauteren, Nikki; Isnard, Olivier

    The Institute of Radiation Protection and Nuclear Safety (France) is planning the set-up of an automatic nuclear aerosol monitoring network over the French territory. Each of the stations will be able to automatically sample the air aerosol content and provide activity concentration measurements for several radionuclides. This should help monitor the set of French and neighbouring countries' nuclear power plants, and would help evaluate the impact of a radiological incident occurring at one of these nuclear facilities. This paper is devoted to the spatial design of such a network. Here, any potential network is judged on its ability to extrapolate activity concentrations measured at the network stations over the whole domain. The performance of a network is quantitatively assessed through a cost function that measures the discrepancy between the extrapolation and the true concentration fields. These true fields are obtained by computing a database of dispersion accidents over one year of meteorology, originating from 20 French nuclear sites. A close-to-optimal network is then sought using simulated annealing optimisation. The results emphasise the importance of the cost function in the design of a network aimed at monitoring an accidental dispersion. Several choices of norm used in the cost function are studied and lead to different designs. The influence of the number of stations is discussed. A comparison with a purely geometric approach, which does not involve simulations with a chemistry-transport model, is performed.

  18. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. The design of experiments (DOE) for Response Surface Methodology (RSM) was constructed, and using the equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO. The warpage improvement of PSO over RSM is only 0.01%. Thus, the optimisation using RSM is already efficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
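
    As a sketch of the PSO stage described above, the code below minimises a hypothetical quadratic response surface standing in for the fitted RSM warpage model (the surrogate, bounds, and the inertia and acceleration constants are illustrative assumptions, not the study's values):

```python
import random

def pso_minimise(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser (illustrative constants)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull towards personal best + pull towards global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical 2-parameter warpage surrogate (mould temperature, melt
# temperature), minimum 0.5 at (60, 230) -- stand-in for the RSM equation.
surrogate = lambda x: (x[0] - 60) ** 2 / 100 + (x[1] - 230) ** 2 / 400 + 0.5
best, val = pso_minimise(surrogate, bounds=[(40, 80), (200, 260)])
```

    In the study, the RSM-fitted polynomial plays the role of `surrogate`, and the five processing parameters span a five-dimensional search space instead of two.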

  19. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
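
    The construction described above, casting a BVP as minimisation of a weighted residual function with the boundary conditions as constraints, can be sketched on a simple regular BVP for brevity (the trial series, penalty weight, and the (1+1) random search standing in for the PSO / water cycle / harmony search optimisers are all our assumptions, not the paper's test problems):

```python
import math
import random

# Illustrative BVP: y'' + y = 0, y(0) = 0, y(pi/2) = 1 (exact solution sin x),
# with odd polynomial trial series y(x) = c1*x + c3*x**3 + c5*x**5, which
# satisfies y(0) = 0 by construction.
XS = [i * (math.pi / 2) / 20 for i in range(1, 21)]  # collocation points

def wrf(c):
    """Weighted residual function: mean squared residual of y'' + y, plus a
    penalty enforcing the remaining boundary condition at x = pi/2."""
    c1, c3, c5 = c
    y = lambda x: c1 * x + c3 * x ** 3 + c5 * x ** 5
    ypp = lambda x: 6 * c3 * x + 20 * c5 * x ** 3
    residual = sum((ypp(x) + y(x)) ** 2 for x in XS) / len(XS)
    penalty = 10.0 * (y(math.pi / 2) - 1.0) ** 2
    return residual + penalty

# Minimise the WRF with a simple (1+1) random search (a stand-in metaheuristic).
rng = random.Random(0)
best = [0.0, 0.0, 0.0]
best_val = wrf(best)
sigma = 0.3
for _ in range(4000):
    cand = [b + rng.gauss(0, sigma) for b in best]
    val = wrf(cand)
    if val < best_val:
        best, best_val = cand, val
    sigma *= 0.999  # slowly shrink the search radius
```

    The Taylor coefficients of sin x, (1, -1/6, 1/120), nearly zero this WRF, which is what the optimiser is driving towards.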

  20. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  1. Cluster-based adaptive power control protocol using Hidden Markov Model for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Vinutha, C. B.; Nalini, N.; Nagaraja, M.

    2017-06-01

    This paper presents strategies for an efficient and dynamic transmission power control technique, in order to reduce packet drops and hence the energy consumption of power-hungry sensor nodes operating in the highly non-linear channel conditions of Wireless Sensor Networks. Besides, we also focus on prolonging network lifetime and scalability by designing a cluster-based network structure. Specifically, we consider a weight-based clustering approach wherein the most significant node, determined from the factors of distance, remaining battery power and received signal strength (RSS), is chosen as Cluster Head (CH). Further, transmission power control schemes that fit dynamic channel conditions are meticulously implemented using a Hidden Markov Model (HMM), where the probability transition matrix is formulated based on the observed RSS measurements. Typically, the CH estimates the initial transmission power of its cluster members (CMs) from RSS using the HMM and broadcasts this value to its CMs for initialising their power value. Further, if the CH finds that there are variations in link quality and in the RSS of the CMs, it re-computes and optimises the transmission power level of the nodes using the HMM to avoid packet loss due to noise interference. Our simulation results demonstrate that the technique efficiently controls the power levels of sensing nodes to save a significant quantity of energy in networks of different sizes.
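
    The weight-based cluster-head election described above can be sketched as a scoring rule over the three stated factors (the normalisation step, the weight values and the sample readings are illustrative assumptions, not the paper's):

```python
def normalise(vals):
    """Scale a list of readings to [0, 1] so different units are comparable."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in vals]

def choose_cluster_head(nodes, w_dist=0.3, w_energy=0.4, w_rss=0.3):
    """Pick the CH by a weighted score: penalise distance, reward residual
    energy and RSS. Weight values are illustrative assumptions."""
    d = normalise([n["dist"] for n in nodes])
    e = normalise([n["energy"] for n in nodes])
    r = normalise([n["rss"] for n in nodes])
    scores = [w_energy * e[i] + w_rss * r[i] - w_dist * d[i]
              for i in range(len(nodes))]
    return nodes[max(range(len(nodes)), key=lambda i: scores[i])]

# Hypothetical candidate nodes: distance (m), residual energy (fraction),
# RSS (dBm).
nodes = [
    {"id": "A", "dist": 40.0, "energy": 0.9, "rss": -60.0},
    {"id": "B", "dist": 10.0, "energy": 0.8, "rss": -55.0},
    {"id": "C", "dist": 25.0, "energy": 0.5, "rss": -70.0},
]
ch = choose_cluster_head(nodes)
```

    Node B wins here: its distance and RSS advantages outweigh A's slightly higher residual energy under these weights.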

  2. Developing Ubiquitous Sensor Network Platform Using Internet of Things: Application in Precision Agriculture.

    PubMed

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José

    2016-07-22

    The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, different barriers have delayed its wide adoption. Some of the main barriers are expensive equipment, the difficulty of operation and maintenance, and the fact that standards for sensor networks are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low power consumption. This work develops and tests a low-cost sensor/actuator network platform, based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control under the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched.

  3. An intelligent service matching method for mechanical equipment condition monitoring using the fibre Bragg grating sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Zhou, Zude; Liu, Quan; Xu, Wenjun

    2017-02-01

    Due to its ability to function under harsh environmental conditions and to serve as a distributed source of condition information in a networked monitoring system, the fibre Bragg grating (FBG) sensor network has attracted considerable attention for online equipment condition monitoring. To provide an overall view of mechanical equipment operating conditions, a networked service-oriented condition monitoring framework based on FBG sensing is proposed, together with an intelligent matching method for supporting monitoring service management. In the novel framework, three classes of progressive service matching approaches, including service-chain knowledge database service matching, multi-objective constrained service matching and workflow-driven human-interactive service matching, are developed and integrated with an enhanced particle swarm optimisation (PSO) algorithm as well as a workflow-driven mechanism. Moreover, the manufacturing domain ontology, the FBG sensor network structure and the monitoring object are considered to facilitate the automatic matching of condition monitoring services, overcoming the limitations of traditional service processing methods. The experimental results demonstrate that FBG monitoring services can be selected intelligently, and the developed condition monitoring system can be re-built rapidly as new equipment joins the framework. The effectiveness of the service matching method is also verified by implementing a prototype system together with its performance analysis.

  4. Modelling the impact of liner shipping network perturbations on container cargo routing: Southeast Asia to Europe application.

    PubMed

    Achurra-Gonzalez, Pablo E; Novati, Matteo; Foulser-Piggott, Roxane; Graham, Daniel J; Bowman, Gary; Bell, Michael G H; Angeloudis, Panagiotis

    2016-06-03

    Understanding how container routing stands to be impacted by different scenarios of liner shipping network perturbations, such as natural disasters or new major infrastructure developments, is of key importance for decision-making in the liner shipping industry. The variety of actors and processes within modern supply chains and the complexity of their relationships have previously led to the development of simulation-based models, whose application has been largely compromised by their dependency on extensive and often confidential sets of data. This study proposes the application of optimisation techniques less dependent on complex data sets in order to develop a quantitative framework for assessing the impacts of disruptive events on liner shipping networks. We provide a categorisation of liner network perturbations, differentiating between systemic and external perturbations, and formulate a container assignment model that minimises routing costs, extending previous implementations to allow feasible solutions when routing capacity is reduced below transport demand. We develop a base case network for the Southeast Asia to Europe liner shipping trade and a review of accidents related to port disruptions for two scenarios of seismic and political conflict hazards. Numerical results identify alternative routing paths and costs in the aftermath of port disruption scenarios and suggest higher vulnerability of intra-regional connectivity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Developing Ubiquitous Sensor Network Platform Using Internet of Things: Application in Precision Agriculture

    PubMed Central

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José

    2016-01-01

    The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, different barriers have delayed its wide adoption. Some of the main barriers are expensive equipment, the difficulty of operation and maintenance, and the fact that standards for sensor networks are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low power consumption. This work develops and tests a low-cost sensor/actuator network platform, based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control under the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched. PMID:27455265

  6. Topology Optimisation for Energy Management in Underwater Sensor Networks

    DTIC Science & Technology

    2015-01-01

    the most efficient communications mechanism (Urick, 1994), where messages are transmitted with a communication radius r_ic for sensor i at a...formula (Urick, 1994) that is valid in the range of hundreds of Hz to 50 kHz: 10 log10 a[f] ≈ 0.11 f²/(1 + f²) + 44 f²/(4100 + f²) + 2.75 × 10⁻⁴ f²...effects. A reasonable approximation is to take the net effects of noise (Urick, 1994) as follows: 10 log10 N[f] ≈ 50 − 18 log10 f, (4) where the
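
    The Urick-style absorption and ambient-noise expressions quoted in this snippet can be evaluated directly (the function names are ours; following the usual convention for this absorption formula we take f in kHz and the coefficient in dB/km, which the snippet does not state explicitly):

```python
import math

def absorption_db_per_km(f_khz):
    """Thorp-style absorption coefficient 10*log10 a[f] in dB/km, with f in
    kHz; valid roughly from hundreds of Hz to 50 kHz per the snippet."""
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2

def noise_db(f_khz):
    """Approximate net ambient-noise level 10*log10 N[f]: it decays by
    18 dB per decade of frequency."""
    return 50 - 18 * math.log10(f_khz)
```

    At 10 kHz, for example, absorption is about 1.18 dB/km while ambient noise has fallen to 32 dB; the trade-off between rising absorption and falling noise is what makes frequency (and hence communication radius) an optimisation variable in such networks.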

  7. A lifelong learning hyper-heuristic method for bin packing.

    PubMed

    Sim, Kevin; Hart, Emma; Paechter, Ben

    2015-01-01

    We describe a novel hyper-heuristic system that continuously learns over time to solve a combinatorial optimisation problem. The system continuously generates new heuristics and samples problems from its environment; and representative problems and heuristics are incorporated into a self-sustaining network of interacting entities inspired by methods in artificial immune systems. The network is plastic in both its structure and content, leading to the following properties: it exploits existing knowledge captured in the network to rapidly produce solutions; it can adapt to new problems with widely differing characteristics; and it is capable of generalising over the problem space. The system is tested on a large corpus of 3,968 new instances of 1D bin-packing problems as well as on 1,370 existing problems from the literature; it shows excellent performance in terms of the quality of solutions obtained across the datasets and in adapting to dynamically changing sets of problem instances compared to previous approaches. As the network self-adapts to sustain a minimal repertoire of both problems and heuristics that form a representative map of the problem space, the system is further shown to be computationally efficient and therefore scalable.

  8. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made up of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.

  9. An application of programmatic assessment for learning (PAL) system for general practice training.

    PubMed

    Schuwirth, Lambert; Valentine, Nyoli; Dilena, Paul

    2017-01-01

    Aim: Programmatic assessment for learning (PAL) is becoming more and more popular as a concept, but its implementation is not without problems. In this paper we describe the design principles behind a PAL program in a general practice training context. Design principles: The PAL program was designed to optimise the meaningfulness of assessment information for the registrar and to make him/her use that information to self-regulate their learning. The main principles in the program were cognitivist and transformative. The main cognitive principles we used were fostering the understanding of deep structures and stimulating transfer by making registrars constantly connect practice experiences with background knowledge. Ericsson's deliberate practice approach was built in with regard to the provision of feedback, combined with Pintrich's model of self-regulation. Mezirow's transformative learning and insights from social network theory on collaborative learning were used to support the registrars in their development to become GP professionals. Finally, the principle of test-enhanced learning was optimised. Epilogue: We have provided this example to explain the design decisions behind our program, but we do not want to present our program as the solution to any given situation.

  10. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox: a single-objective global optimiser and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris, and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  11. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  12. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem in which the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by optimising appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
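
    The tail mean criterion described above can be illustrated with a short sketch (all payoffs and solution names are invented): a solution's tail mean is the average of its k worst scenario outcomes, a softer robustness measure than the pure worst case.

```python
def tail_mean(outcomes, k):
    """Average of the k worst (smallest) scenario outcomes."""
    worst = sorted(outcomes)[:k]
    return sum(worst) / k

# Toy payoff table: keys are candidate solutions, values are per-scenario payoffs.
payoffs = {
    "conservative": [4.0, 4.1, 4.2, 4.0],
    "aggressive":   [3.9, 6.0, 7.0, 8.0],
}

# The worst-case criterion prefers "conservative" (4.0 vs 3.9), while the
# tail mean over the two worst scenarios prefers "aggressive" (4.95 vs 4.0).
best_worst_case = max(payoffs, key=lambda s: min(payoffs[s]))
best_tail_mean = max(payoffs, key=lambda s: tail_mean(payoffs[s], 2))
print(best_worst_case, best_tail_mean)
```

    The example shows the trade the abstract describes: the tail mean gives up a little worst-case safety in exchange for better performance averaged over the bad scenarios.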

  13. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention to support optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process is built on four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system thus becomes a reactive activity that engages the core activity of the radiology department, performing examinations, only to a certain extent. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, of which managing radiation dose is only one. This emphasises the need to take a holistic approach integrating the optimisation process in different clinical activities.

  14. QML-AiNet: An immune network approach to learning qualitative differential equation models

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper, we explore the application of Opt-AiNet, an immune network approach for search and optimisation problems, to learning qualitative models in the form of qualitative differential equations. The Opt-AiNet algorithm is adapted to qualitative model learning problems, resulting in the proposed system QML-AiNet. The potential of QML-AiNet to address the scalability and multimodal search space issues of qualitative model learning has been investigated. More importantly, to further improve the efficiency of QML-AiNet, we also modify the mutation operator according to the features of discrete qualitative model space. Experimental results show that the performance of QML-AiNet is comparable to QML-CLONALG, a QML system using the clonal selection algorithm (CLONALG). More importantly, QML-AiNet with the modified mutation operator can significantly improve the scalability of QML and is much more efficient than QML-CLONALG. PMID:25648212
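
    The immune-network search the record builds on can be caricatured in a few lines (an illustrative sketch of the opt-aiNet idea on a continuous toy function, not the discrete qualitative-model variant described above): candidates are cloned, clones are mutated with a step size that shrinks for fitter parents, and near-duplicate survivors are suppressed so that several optima are tracked at once.

```python
import math
import random

def ainet_step(population, f, rng, n_clones=5, beta=2.0, sup_radius=0.2):
    """One opt-aiNet-style iteration: clone, hypermutate, select, suppress."""
    fits = [f(x) for x in population]
    lo, hi = min(fits), max(fits)
    survivors = []
    for x, fit in zip(population, fits):
        norm = (fit - lo) / (hi - lo + 1e-12)        # 1.0 = fittest parent
        scale = math.exp(-beta * norm)               # fitter -> smaller steps
        clones = [x + rng.gauss(0.0, scale) for _ in range(n_clones)] + [x]
        survivors.append(max(clones, key=f))
    # Network suppression: discard survivors too close to a better one,
    # so the population spreads over several optima instead of collapsing.
    survivors.sort(key=f, reverse=True)
    kept = []
    for s in survivors:
        if all(abs(s - k) > sup_radius for k in kept):
            kept.append(s)
    return kept

rng = random.Random(0)
f = lambda x: math.cos(3 * x) - 0.05 * x * x         # multimodal toy function
pop = [rng.uniform(-4.0, 4.0) for _ in range(10)]
for _ in range(30):
    pop = ainet_step(pop, f, rng)
print(len(pop), round(max(f(x) for x in pop), 2))
```

    The suppression radius plays the same niching role the abstract attributes to the immune-network metaphor: it is what lets the method handle multimodal search spaces.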

  15. An Approach to Automated Fusion System Design and Adaptation

    PubMed Central

    Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker

    2017-01-01

    Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation. This is due to the demand of mass customisation and the increasing complexity of industrial systems. The conversion to modular systems is related to challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, which relies on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. This article encompasses the fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum. PMID:28300762

  16. QML-AiNet: An immune network approach to learning qualitative differential equation models.

    PubMed

    Pang, Wei; Coghill, George M

    2015-02-01

    In this paper, we explore the application of Opt-AiNet, an immune network approach for search and optimisation problems, to learning qualitative models in the form of qualitative differential equations. The Opt-AiNet algorithm is adapted to qualitative model learning problems, resulting in the proposed system QML-AiNet. The potential of QML-AiNet to address the scalability and multimodal search space issues of qualitative model learning has been investigated. More importantly, to further improve the efficiency of QML-AiNet, we also modify the mutation operator according to the features of discrete qualitative model space. Experimental results show that the performance of QML-AiNet is comparable to QML-CLONALG, a QML system using the clonal selection algorithm (CLONALG). More importantly, QML-AiNet with the modified mutation operator can significantly improve the scalability of QML and is much more efficient than QML-CLONALG.

  17. An Approach to Automated Fusion System Design and Adaptation.

    PubMed

    Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker

    2017-03-16

    Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation. This is due to the demand of mass customisation and the increasing complexity of industrial systems. The conversion to modular systems is related to challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, which relies on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. This article encompasses the fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum.

  18. Computer software tool REALM for sustainable water allocation and management.

    PubMed

    Perera, B J C; James, B; Kularathna, M D U

    2005-12-01

    REALM (REsource ALlocation Model) is a generalised computer simulation package that models the harvesting and bulk distribution of water resources within a water supply system. It is a modelling tool that can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, the modelling of environmental flows, and the assessment of security of supply issues.
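
    A caricature of one simulation time step (an illustrative sketch only, not REALM's network linear programming formulation; all node names and volumes are invented): water is allocated from a single source to demand nodes through capacity-limited carriers in a user-defined priority order, with mass balance holding by construction.

```python
def allocate(available, demands, carrier_capacity, priority):
    """Allocate supply to nodes in priority order; returns (deliveries, surplus)."""
    delivered = {}
    remaining = available
    for node in priority:
        take = min(demands[node], carrier_capacity[node], remaining)
        delivered[node] = take
        remaining -= take
    return delivered, remaining

demands = {"urban": 60.0, "irrigation": 80.0, "environment": 30.0}
capacity = {"urban": 50.0, "irrigation": 70.0, "environment": 30.0}
out, surplus = allocate(120.0, demands, capacity,
                        priority=["environment", "urban", "irrigation"])
print(out, surplus)
```

    REALM itself poses this allocation as a network linear program solved at every time step, which can handle interactions (such as shared carriers and looped networks) that a simple greedy priority loop cannot.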

  19. Recruitment and retention of young women into nutrition research studies: practical considerations.

    PubMed

    Leonard, Alecia; Hutchesson, Melinda; Patterson, Amanda; Chalmers, Kerry; Collins, Clare

    2014-01-16

    Successful recruitment and retention of participants into research studies is critical for optimising internal and external validity. Research into the diet and lifestyle of young women is important due to the physiological transitions experienced at this life stage. This paper aims to evaluate data related to recruitment and retention across three research studies with young women, and to present practical advice on recruiting and retaining young women in order to optimise study quality within nutrition research. Recruitment and retention strategies used in three nutrition studies that targeted young women (18 to 35 years) were critiqued. A randomised controlled trial (RCT), a crossover validation study and a cross-sectional survey were conducted at the University of Newcastle, Australia between 2010 and 2013. Successful recruitment was defined as maximum recruitment relative to time. Retention was assessed as the maximum number of participants remaining enrolled at study completion. Recruitment approaches included notice boards, web and social network sites (Facebook and Twitter), with social media the most successful in recruitment. The online survey had the highest recruitment in the shortest time-frame (751 participants in one month). Email, phone and text message contact was used in study one (the RCT) and study two (the crossover validation) and contributed to low attrition rates, with 93% and 75.7% completing the RCT and crossover validation study respectively. Of those who did not complete the RCT, the reported reasons were being too busy and having an unrelated illness. Recruiting young women into nutrition research is challenging. Use of social media enhances recruitment, while email, phone and text message contact improves retention within interventions. Further research comparing strategies to optimise recruitment and retention in young women, including flexible testing times, reminders and incentives, is warranted.

  20. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases.

  1. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  2. Development and optimisation of atorvastatin calcium loaded self-nanoemulsifying drug delivery system (SNEDDS) for enhancing oral bioavailability: in vitro and in vivo evaluation.

    PubMed

    Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M

    2017-05-01

    The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and eventually the oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. The optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. The optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of the optimised ATC-SNEDDS in rats was 2.34-fold higher than that of an ATC suspension. Pharmacodynamic studies revealed a significant reduction in the serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.

  3. H∞ output tracking control of uncertain and disturbed nonlinear systems based on neural network model

    NASA Astrophysics Data System (ADS)

    Li, Chengcheng; Li, Yuefeng; Wang, Guanglin

    2017-07-01

    The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.

  4. Data interoperability between European Environmental Research Infrastructures and their contribution to global data networks

    NASA Astrophysics Data System (ADS)

    Kutsch, W. L.; Zhao, Z.; Hardisty, A.; Hellström, M.; Chin, Y.; Magagna, B.; Asmi, A.; Papale, D.; Pfeil, B.; Atkinson, M.

    2017-12-01

    Environmental Research Infrastructures (ENVRIs) are expected to become important pillars not only for supporting their own scientific communities, but also a) for inter-disciplinary research and b) for the European Earth Observation Program Copernicus as a contribution to the Global Earth Observation System of Systems (GEOSS) or global thematic data networks. As such, it is very important that data-related activities of the ENVRIs will be well integrated. This requires common policies, models and e-infrastructure to optimise technological implementation, define workflows, and ensure coordination, harmonisation, integration and interoperability of data, applications and other services. The key is interoperating common metadata systems (utilising a richer metadata model as the 'switchboard' for interoperation with formal syntax and declared semantics). The metadata characterises data, services, users and ICT resources (including sensors and detectors). The European Cluster Project ENVRIplus has developed a reference model (ENVRI RM) for common data infrastructure architecture to promote interoperability among ENVRIs. The presentation will provide an overview of recent progress and give examples for the integration of ENVRI data in global integration networks.

  5. Quantification of groundwater infiltration and surface water inflows in urban sewer networks based on a multiple model approach.

    PubMed

    Karpf, Christian; Krebs, Peter

    2011-05-01

    The management of sewer systems requires information about the discharge and variability of typical wastewater sources in urban catchments. The infiltration of groundwater and the inflow of surface water (I/I) are especially important for making decisions about the rehabilitation and operation of sewer networks. This paper presents a methodology to identify I/I and estimate its quantity. For each flow fraction in sewer networks, an individual model approach is formulated whose parameters are optimised by the method of least squares. This method was applied to estimate the contributions to the wastewater flow in the sewer system of the City of Dresden (Germany), where data availability is good. Absolute flows of I/I and their temporal variations are estimated. Further information on the characteristics of infiltration is gained by clustering and grouping sewer pipes according to the attributes construction year and groundwater influence, and relating the resulting classes to infiltration behaviour. Further, it is shown that condition classes based on CCTV data can be used to estimate the infiltration potential of sewer pipes.
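
    The flow-fraction idea can be sketched as follows (an assumed toy model, not the paper's formulation): the measured flow is split into a known foul-water pattern, a constant groundwater infiltration term and a rain-driven inflow term, and the last two are estimated by the method of least squares via the 2x2 normal equations.

```python
def fit_infiltration_inflow(q_obs, q_foul, rain):
    """Least-squares estimate of (constant infiltration, inflow per unit rain)."""
    y = [q - f for q, f in zip(q_obs, q_foul)]      # flow left after foul water
    n = len(y)
    s_r = sum(rain)
    s_rr = sum(r * r for r in rain)
    s_y = sum(y)
    s_ry = sum(r * v for r, v in zip(rain, y))
    det = n * s_rr - s_r * s_r                      # normal-equation determinant
    infiltration = (s_y * s_rr - s_r * s_ry) / det
    inflow = (n * s_ry - s_r * s_y) / det
    return infiltration, inflow

# Synthetic record: foul water + 3 units of infiltration + 2 units per mm rain.
q_foul = [10.0, 12.0, 11.0, 10.0]
rain = [0.0, 0.0, 5.0, 2.0]
q_obs = [13.0, 15.0, 24.0, 17.0]
est = fit_infiltration_inflow(q_obs, q_foul, rain)
print(est)
```

    On this noise-free synthetic record the fit recovers the generating values exactly; with real gauge data the same normal equations yield the best-fitting compromise.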

  6. Quantification of fructo-oligosaccharides based on the evaluation of oligomer ratios using an artificial neural network.

    PubMed

    Onofrejová, Lucia; Farková, Marta; Preisler, Jan

    2009-04-13

    The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as ¹⁸O, ¹⁵N or ¹³C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymerisation of 2-7. The tetraoligosaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches to data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption that a constant oligomer dispersion profile will change after the addition of the internal standard, plus some simple numerical calculations. The ANN, on the other hand, was found to compensate for a non-linear MALDI response and for variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of the ANN led to lower quantification errors and excellent day-to-day repeatability compared to the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; the application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out: a common MALDI matrix and sample preparation were used, and only basic parameters, such as sampling and laser energy, were optimised prior to quantification.
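
    The conventional internal-standard arithmetic reduces to a response-factor calculation (a minimal sketch with invented intensities; the paper's ANN route replaces exactly this step in order to absorb the non-linear MALDI response):

```python
def response_factor(cal_ratio, cal_amount_pg):
    """Analyte/IS intensity ratio produced per pg of analyte (one-point cal)."""
    return cal_ratio / cal_amount_pg

def quantify(ratio, rf):
    """Amount of analyte (pg) from an observed analyte/IS intensity ratio."""
    return ratio / rf

rf = response_factor(cal_ratio=0.50, cal_amount_pg=250.0)   # 0.002 per pg
amount = quantify(0.30, rf)                                  # about 150 pg
print(amount)
```

    Dividing by the internal-standard intensity before applying the response factor is what cancels shot-to-shot variations in sample preparation and instrumental response.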

  7. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    NASA Astrophysics Data System (ADS)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows. First, the capability to mathematically transform complex chains of operations to simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity and, second, to perform a compile time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over the classical low-level style programming techniques.
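
    The compile-time search for optimal contraction indices is, in its simplest incarnation, the classic matrix-chain problem: pick the pairwise contraction order that minimises floating-point operations. A dynamic-programming sketch (illustrative only, not the framework's implementation):

```python
def min_contraction_flops(dims):
    """Minimal multiply count for a matrix chain; matrix i is dims[i] x dims[i+1]."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            # Best split point k for contracting matrices i..j.
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j))
    return cost[0][n - 1]

# (A 10x100)(B 100x5)(C 5x50): (AB)C needs 7500 multiplies, A(BC) needs 75000.
print(min_contraction_flops([10, 100, 5, 50]))
```

    General tensor networks replace the chain with an arbitrary graph, which is why the framework resorts to depth-first or breadth-first search over contraction orders rather than this simple recurrence.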

  8. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them, in particular, have been applied effectively to determine the ultimate-pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for the optimisation of stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.

  9. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
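
    Extracting a Pareto set from candidate designs can be sketched in a few lines (objective names and values are invented; both objectives are to be maximised): a design is kept if no other design is at least as good in both objectives and strictly better in one.

```python
def pareto_front(designs):
    """designs: (name, obj1, obj2) tuples; returns names of non-dominated designs."""
    front = []
    for name, a1, a2 in designs:
        dominated = any(
            b1 >= a1 and b2 >= a2 and (b1 > a1 or b2 > a2)
            for _, b1, b2 in designs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical candidates: (label, sensitivity, measurable force range).
designs = [("A", 0.9, 2.0), ("B", 0.6, 5.0), ("C", 0.5, 4.0), ("D", 0.8, 3.0)]
print(pareto_front(designs))
```

    Here "C" is dominated by "B" and drops out; the surviving designs trace the compromise curve between the two objectives, from which a final design is picked by engineering judgement.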

  10. Algorithme intelligent d'optimisation d'un design structurel de grande envergure (Intelligent optimisation algorithm for a large-scale structural design)

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialise the population of an island of the genetic algorithm. The algorithm optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem. Then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems in the field of mechanical device structural design.
    The algorithm is named GATE, and is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. The results are compared to those obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalised pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.
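
    The territorial idea can be caricatured as follows (a toy 1-D sketch of the exclusion-radius mechanism only, not the thesis's GATE implementation; the objective function and all parameters are invented): offspring falling within a radius of any previously evaluated solution are re-drawn, and the radius shrinks over the run to move from global to local search.

```python
import random

def gate_like_minimise(f, bounds, radius0=0.5, generations=200, seed=1):
    """Toy real-number search with a shrinking exclusion radius (a "territory")."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    archive = [best_x]                               # every evaluated point
    for g in range(generations):
        radius = radius0 * (1.0 - g / generations)   # territory shrinks over time
        for _ in range(20):                          # retry while inside a territory
            child = min(hi, max(lo, best_x + rng.gauss(0.0, 0.4)))
            if all(abs(child - a) > radius for a in archive):
                break
        archive.append(child)
        fc = f(child)
        if fc < best_f:
            best_x, best_f = child, fc
    return best_x, best_f

x, fx = gate_like_minimise(lambda v: (v - 2.0) ** 2, (-5.0, 5.0))
print(round(x, 2), round(fx, 4))
```

    Early on, the large radius forces children away from already-sampled points (global search); as it shrinks, children may refine the neighbourhood of the best solution (local search), which is the behaviour the abstract attributes to GATE's restricted area.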

  11. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases. PMID:27375471

  12. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in the Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART, would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of patients candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based in dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  13. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45‧ and latitude 23°50‧) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources with the solar-wind hybrid renewable energy system is a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator, and reduces fuel costs by approximately 70-80% compared with the conventional diesel generator.
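
    The particle swarm optimisation used for comparison above can be sketched in its basic global-best form. The cost model below is a made-up stand-in for a hybrid-system sizing objective, not the study's HOMER model or its PSO/CPSO implementations.

```python
import random

def pso(cost, dim, bounds, n_particles=15, iters=60, seed=0):
    """Minimal particle swarm optimiser (global-best topology)."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.72, 1.5, 1.5            # inertia and acceleration weights
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical annualised-cost model: x = (PV kW, wind kW); ~100 kW demand.
annual_cost = lambda x: (x[0] + x[1] - 100.0) ** 2 + 0.1 * x[0] + 0.2 * x[1]
best, best_cost = pso(annual_cost, dim=2, bounds=(0.0, 200.0))
```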

  14. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica content and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were evaluated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution, based on a desirability function, that simultaneously optimises the variables and the responses. The optimised solution had an overall desirability of 0.811. The experimental and predicted results showed minimal errors in all the measured responses.
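
    The desirability-function approach referred to above can be illustrated with a small sketch. Each response is mapped to an individual desirability d_i in [0, 1] and the overall desirability is their geometric mean; the response values and limits below are hypothetical, not the study's data.

```python
import math

def desirability_max(y, low, target, weight=1.0):
    """Larger-is-better desirability: 0 below `low`, 1 at/above `target`."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def desirability_min(y, target, high, weight=1.0):
    """Smaller-is-better desirability: 1 at/below `target`, 0 above `high`."""
    if y >= high:
        return 0.0
    if y <= target:
        return 1.0
    return ((high - y) / (high - target)) ** weight

def overall(desirabilities):
    """Overall desirability: geometric mean of the individual d_i."""
    if any(d == 0.0 for d in desirabilities):
        return 0.0
    return math.exp(sum(math.log(d) for d in desirabilities) / len(desirabilities))

# Hypothetical responses: maximise strength, minimise porosity, maximise slump flow.
D = overall([
    desirability_max(62.0, low=40.0, target=70.0),    # compressive strength, MPa
    desirability_min(8.5, target=6.0, high=14.0),     # porosity, %
    desirability_max(250.0, low=200.0, target=280.0), # slump flow, mm
])
```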

  15. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters and improve computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. The last step focuses on semi-active suspension: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  16. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95  <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase  <200 ms and for changes in the breathing period of  <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.

  17. An approximate dynamic programming approach to resource management in multi-cloud scenarios

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo

    2017-03-01

    The programmability and the virtualisation of network resources are crucial to deploying scalable Information and Communications Technology (ICT) services. The increasing demand for cloud services, mainly devoted to storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find an approximate solution online.
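
    A toy version of the reinforcement-learning idea, reduced to a single-resource admission-control MDP, might look like the sketch below. The states, action set, reward, and release probability are all illustrative inventions, not the authors' approximate dynamic programming formulation.

```python
import random

def q_learning_admission(capacity=5, episodes=3000, seed=0):
    """Tabular Q-learning for a toy admission-control MDP: the state is the
    number of free resource units, the action is accept (1) or reject (0) an
    incoming request; accepting earns revenue but consumes a unit, and held
    units are released at random over time."""
    rng = random.Random(seed)
    alpha, gamma, eps = 0.1, 0.95, 0.1       # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in range(capacity + 1) for a in (0, 1)}
    for _ in range(episodes):
        s = capacity
        for _ in range(50):                  # bounded episode length
            actions = (0, 1) if s > 0 else (0,)
            if rng.random() < eps:
                a = rng.choice(actions)      # epsilon-greedy exploration
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            reward = 1.0 if a == 1 else 0.0  # revenue for serving a request
            s2 = s - a
            if s2 < capacity and rng.random() < 0.3:
                s2 += 1                      # a held unit is released
            best_next = max(Q[(s2, b)] for b in (0, 1))
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = q_learning_admission()
```

    In this toy model the learned policy should accept whenever capacity is free; the paper's setting is richer (multiple clouds, heterogeneous requests, revenue maximisation over time).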

  18. Optimisation of insect cell growth in deep-well blocks: development of a high-throughput insect cell expression screen.

    PubMed

    Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian

    2005-01-01

    This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents, and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.

  19. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm which hybridises the Harmony Search (HS) method with the concept of swarm intelligence from particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to monitor the choice between the ACO and GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
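
    For reference, the basic HS improvisation loop that GHS and GHSACO build on can be sketched as follows. This is a generic textbook version on a toy objective, not the authors' hybrid algorithm.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal Harmony Search: each new harmony draws each variable from the
    harmony memory (prob. HMCR), possibly pitch-adjusted (prob. PAR), otherwise
    at random; it replaces the worst stored harmony if it is better."""
    rng = random.Random(seed)
    lo, hi = bounds
    bw = 0.05 * (hi - lo)                       # pitch-adjustment bandwidth
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:             # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:          # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                               # random selection
                x = rng.uniform(lo, hi)
            new.append(x)
        c = f(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=lambda i: costs[i])
    return memory[best], costs[best]

sphere = lambda x: sum(v * v for v in x)
best, best_cost = harmony_search(sphere, dim=3, bounds=(-10.0, 10.0))
```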

  20. Receptive field optimisation and supervision of a fuzzy spiking neural network.

    PubMed

    Glackin, Cornelius; Maguire, Liam; McDaid, Liam; Sayers, Heather

    2011-04-01

    This paper presents a supervised training algorithm that implements fuzzy reasoning on a spiking neural network. Neuron selectivity is facilitated using receptive fields that enable individual neurons to be responsive to certain spike train firing rates and behave in a similar manner to fuzzy membership functions. The connectivity of the hidden and output layers in the fuzzy spiking neural network (FSNN) is representative of a fuzzy rule base. Fuzzy C-Means clustering is utilised to produce clusters that represent the antecedent part of the fuzzy rule base that aid classification of the feature data. Suitable cluster widths are determined using two strategies: subjective thresholding and evolutionary thresholding. The former technique typically results in compact solutions in terms of the number of neurons, and is shown to be particularly suited to small data sets. In the latter technique a pool of cluster candidates is generated using Fuzzy C-Means clustering and then a genetic algorithm is employed to select the most suitable clusters and to specify cluster widths. In both scenarios, the network is supervised but learning only occurs locally as in the biological case. The advantages and disadvantages of the network topology for the Fisher Iris and Wisconsin Breast Cancer benchmark classification tasks are demonstrated and directions of current and future work are discussed. Copyright © 2010 Elsevier Ltd. All rights reserved.
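
    The Fuzzy C-Means step used above to build the antecedent clusters can be sketched generically. This is a plain FCM on toy 2-D data (fuzzifier m = 2), not the paper's receptive-field pipeline.

```python
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-Means: alternately update memberships u_ik and centres."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # random initial memberships, normalised per point
    u = [[rng.random() for _ in range(c)] for _ in range(n)]
    u = [[v / sum(row) for v in row] for row in u]
    centres = [[0.0] * dim for _ in range(c)]
    for _ in range(iters):
        # centre update: weighted mean with weights u^m
        for k in range(c):
            w = [u[i][k] ** m for i in range(n)]
            tot = sum(w)
            centres[k] = [sum(w[i] * points[i][d] for i in range(n)) / tot
                          for d in range(dim)]
        # membership update from inverse squared distances to the centres
        for i in range(n):
            d2 = [max(1e-12, sum((points[i][d] - centres[k][d]) ** 2
                                 for d in range(dim))) for k in range(c)]
            for k in range(c):
                u[i][k] = 1.0 / sum((d2[k] / d2[j]) ** (1.0 / (m - 1.0))
                                    for j in range(c))
    return centres, u

# Two well-separated toy clusters near (0.1, 0.1) and (5.0, 5.0).
pts = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centres, u = fuzzy_c_means(pts)
```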

  1. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality as current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  2. Analysis of Gene Regulatory Networks of Maize in Response to Nitrogen.

    PubMed

    Jiang, Lu; Ball, Graham; Hodgman, Charlie; Coules, Anne; Zhao, Han; Lu, Chungui

    2018-03-08

    Nitrogen (N) fertilizer has a major influence on crop yield and quality. Understanding and optimising the response of crop plants to nitrogen fertilizer usage is of central importance in enhancing food security and agricultural sustainability. In this study, the analysis of gene regulatory networks reveals multiple genes and biological processes in response to N. Two microarray studies have been used to infer components of the nitrogen-response network. Since they used different array technologies, a map linking the two probe sets to the maize B73 reference genome has been generated to allow comparison. Putative Arabidopsis homologues of maize genes were used to query the Biological General Repository for Interaction Datasets (BioGRID) network, which yielded the potential involvement of three transcription factors (TFs) (GLK5, MADS64 and bZIP108) and a Calcium-dependent protein kinase. An Artificial Neural Network was used to identify influential genes and retrieved bZIP108 and WRKY36 as significant TFs in both microarray studies, along with genes for Asparagine Synthetase, a dual-specific protein kinase and a protein phosphatase. The output from one study also suggested roles for microRNA (miRNA) 399b and Nin-like Protein 15 (NLP15). Co-expression-network analysis of TFs with closely related profiles to known nitrate-responsive genes identified GLK5, GLK8 and NLP15 as candidate regulators of genes repressed under low-nitrogen conditions, while bZIP108 might play a role in gene activation.

  3. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  4. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been done and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.

  5. Optimising UK urban road verge contributions to biodiversity and ecosystem services with cost-effective management.

    PubMed

    O'Sullivan, Odhran S; Holt, Alison R; Warren, Philip H; Evans, Karl L

    2017-04-15

    Urban road verges can contain significant biodiversity, contribute to structural connectivity between other urban greenspaces, and due to their proximity to road traffic are well placed to provide ecosystem services. Using the UK as a case study we review and critically evaluate a broad range of evidence to assess how this considerable potential can be enhanced despite financial, contractual and public opinion constraints. Reduced mowing frequency and other alterations would enhance biodiversity, aesthetics and pollination services, whilst delivering cost savings and potentially being publicly acceptable. Retaining mature trees and planting additional ones is favourable to residents and would enhance biodiversity, pollution and climate regulation, carbon storage, and stormwater management. Optimising these services requires improved selection of tree species, and creating a more diverse tree stock. Due to establishment costs, additional tree planting and maintenance could benefit from payment for ecosystem service schemes. Verges could also provide areas for cultivation of biofuels and possibly food production. Maximising the contribution of verges to urban biodiversity and ecosystem services is economical and becoming an increasingly urgent priority as the road network expands and other urban greenspace is lost, requiring enhancement of existing greenspace to facilitate sustainable urban development. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Emergence of scale-free characteristics in socio-ecological systems with bounded rationality

    PubMed Central

    Kasthurirathna, Dharshana; Piraveenan, Mahendra

    2015-01-01

    Socio-ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback-Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small-world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio-ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems. PMID:26065713
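
    The rationality measure described above can be illustrated with a one-shot logit quantal response sketch (payoffs are hypothetical and the game is reduced to a single decision, unlike the networked games in the paper): as the rationality parameter λ grows, the quantal response distribution approaches the Nash best response and the Kullback-Leibler divergence shrinks.

```python
import math

def logit_choice(payoffs, lam):
    """Logit quantal response: p_i ∝ exp(λ·u_i); λ → ∞ recovers best response."""
    m = max(payoffs)
    exps = [math.exp(lam * (u - m)) for u in payoffs]   # shift for stability
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(max(pi, eps) / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0.0)

payoffs = [3.0, 2.0, 0.0]   # hypothetical payoffs for three available actions
nash = [1.0, 0.0, 0.0]      # perfect rationality: the pure best response
# divergence from the Nash strategy shrinks as rationality λ grows
divs = [kl(nash, logit_choice(payoffs, lam)) for lam in (0.1, 1.0, 10.0)]
```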

  8. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699. Registered on 4 February 2016.

  9. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for chest examinations in infants exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
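
    By convention, local DRLs are set at the 75th percentile (third quartile) of the observed dose distribution for a given examination and patient group. A minimal sketch, with made-up entrance-surface air kerma values and a hand-rolled percentile helper:

```python
def percentile(values, q):
    """Linear-interpolated percentile (q in [0, 100]) of a list of numbers."""
    s = sorted(values)
    if len(s) == 1:
        return s[0]
    pos = (len(s) - 1) * q / 100.0
    i = int(pos)
    frac = pos - i
    return s[i] if frac == 0 else s[i] + frac * (s[i + 1] - s[i])

# Hypothetical entrance-surface air kerma values (µGy) for newborn chest X-rays.
esak = [48, 52, 55, 60, 61, 63, 66, 70, 74, 80, 85, 95]
local_drl = percentile(esak, 75)   # local DRL: 3rd quartile of the distribution
```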

  10. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
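
    The coupling of a stochastic optimiser to a Monte-Carlo objective can be sketched as follows. The one-parameter "receiver" model and its capture probabilities are invented for illustration; the paper's actual ray tracer and optimiser are far richer.

```python
import math
import random

def mc_estimate(efficiency_model, shape, rays=2000, rng=None):
    """Monte-Carlo estimate of receiver efficiency: fraction of random rays
    'captured', under a toy capture probability set by the shape parameter."""
    rng = rng or random
    p_capture = efficiency_model(shape)
    hits = sum(1 for _ in range(rays) if rng.random() < p_capture)
    return hits / rays

def anneal(efficiency_model, lo, hi, steps=300, seed=0):
    """Simulated annealing on the noisy MC objective (maximising efficiency)."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = mc_estimate(efficiency_model, x, rng=rng)
    best, best_f = x, fx
    for k in range(steps):
        T = 0.1 * (1.0 - k / steps) + 1e-3               # cooling schedule
        x2 = min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
        f2 = mc_estimate(efficiency_model, x2, rng=rng)
        # accept improvements always, worse moves with Boltzmann probability
        if f2 > fx or rng.random() < math.exp((f2 - fx) / T):
            x, fx = x2, f2
            if fx > best_f:
                best, best_f = x, fx
    return best, best_f

# Toy model: capture probability peaks at aperture depth 0.6 (hypothetical).
model = lambda d: max(0.0, 0.9 - 2.0 * (d - 0.6) ** 2)
best_depth, best_eff = anneal(model, 0.0, 1.0)
```

    A stochastic (rather than gradient-based) optimiser is a natural fit here because each objective evaluation is itself a noisy Monte-Carlo estimate.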

  11. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so too has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open-source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON-based web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.
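
    A generalised node/link network serialised as JSON, as an external app might exchange with such a web API, could look like the sketch below. The field names are illustrative, not the actual Hydra Platform schema.

```python
import json

# Hypothetical payload: a generic node/link topology with attached attribute
# data, so a water, energy, or any other resource network fits the same shape.
network = {
    "name": "demo water system",
    "nodes": [
        {"id": 1, "name": "reservoir", "attributes": {"capacity_Mm3": 120.0}},
        {"id": 2, "name": "city",      "attributes": {"demand_Mm3_yr": 45.0}},
    ],
    "links": [
        {"id": 10, "name": "main canal", "node_1_id": 1, "node_2_id": 2,
         "attributes": {"max_flow_m3_s": 30.0}},
    ],
}

payload = json.dumps(network)        # what an app would POST to the web API
roundtrip = json.loads(payload)      # and what the server would hand back
```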

  12. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
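
    The two interpolations named in this abstract can be sketched in a few lines: a Brinkman-type inverse permeability penalises velocities in solid regions, and the effective thermal conductivity is interpolated between the solid and fluid values. The RAMP-style formulas and constants below are illustrative assumptions, not the paper's exact choices; here rho = 1 denotes fluid and rho = 0 solid.

```python
def brinkman_alpha(rho, alpha_max=1e5, q=0.1):
    # Brinkman inverse permeability: large in solid (rho = 0) to suppress
    # flow, zero in fluid (rho = 1). alpha_max and q are illustrative.
    return alpha_max * (1.0 - rho) / (1.0 + q * rho)

def effective_conductivity(rho, k_s=10.0, k_f=1.0, q=0.1):
    # RAMP-style interpolation between solid (rho = 0) and fluid (rho = 1)
    # conductivity; the convex shape discourages intermediate densities.
    return k_s + (k_f - k_s) * rho * (1.0 + q) / (rho + q)
```

During optimisation these material fields enter the Navier-Stokes and convection-diffusion residuals element by element.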

  13. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their applications in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation.
To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and developed algorithms perform successfully and efficiently on a design optimisation with over 200 design variables.

  14. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
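
    The core mechanics can be sketched: each elementary landscape component f_i of the decomposed objective is treated as a separate objective to minimise, and multi-objective algorithms such as NSGA-II and SPEA2 then compare solutions by Pareto dominance over the resulting objective vectors. The snippet below is a minimal dominance check and non-dominated filter, not an NSGA-II implementation.

```python
def dominates(u, v):
    """Pareto dominance for minimisation: u dominates v if it is no worse
    in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    # Keep the objective vectors not dominated by any other point.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

front = pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)])
```

Here (2, 2) and (3, 3) are dominated, so only (1, 2) and (2, 1) survive; NSGA-II applies this comparison recursively to rank the whole population.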

  15. On damage diagnosis for a wind turbine blade using pattern recognition

    NASA Astrophysics Data System (ADS)

    Dervilis, N.; Choi, M.; Taylor, S. G.; Barthorpe, R. J.; Park, G.; Farrar, C. R.; Worden, K.

    2014-03-01

    With the increased interest in implementation of wind turbine power plants in remote areas, structural health monitoring (SHM) will be one of the key factors in the efficient establishment of wind turbines in the energy arena. Detection of blade damage at an early stage is a critical problem, as blade failure can lead to a catastrophic outcome for the entire wind turbine system. Experimental measurements from vibration analysis were extracted from a 9 m CX-100 blade by researchers at Los Alamos National Laboratory (LANL) throughout a full-scale fatigue test conducted at the National Renewable Energy Laboratory (NREL) and National Wind Technology Center (NWTC). The blade was harmonically excited at its first natural frequency using a Universal Resonant EXcitation (UREX) system. In the current study, machine learning algorithms based on Artificial Neural Networks (ANNs), including an Auto-Associative Neural Network (AANN) based on a standard ANN form and a novel approach to auto-association with Radial Basis Function (RBF) networks, are used, which are optimised for fast and efficient runs. This paper introduces such pattern recognition methods into the wind energy field and attempts to address the effectiveness of such methods by combining vibration response data with novelty detection techniques.

  16. Decision Network for Blue Green Solutions to Influence Policy Impact Assessments

    NASA Astrophysics Data System (ADS)

    Mijic, A.; Theodoropoulos, G.; El Hattab, M. H.; Brown, K.

    2017-12-01

    Sustainable Urban Drainage Systems (SuDS) deliver ecosystems services that can potentially yield multiple benefits to the urban environment. These benefits can be achieved through optimising SuDS' integration with the local environment and water resources, creating so-called Blue Green Solutions (BGS). The BGS paradigm, however, presents several challenges, in particular quantifying the benefits and creating the scientific evidence base that can persuade high-level decision-makers and stakeholders to implement BGS at large scale. This work presents the development of an easily implemented, tailor-made approach that allows a robust assessment of BGS co-benefits and can influence the types of information that are included in policy impact assessments. The Analytic Network Process approach is used to synthesise the available evidence on the co-benefits of BGS. The approach enables mapping the interactions between individual BGS selection criteria, and creates a platform to assess the synergetic benefits that arise from component interactions. By working with Government departments and other public and private sector stakeholders, this work has produced a simple decision criteria-based network that will enable the co-benefits and trade-offs of BGS to be quantified and integrated into UK policy appraisals.

  17. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…

  18. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

    The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Owing to the difficulty of controller design under the multiple constraints of the multirate switching model, this model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and single-rate parallel distributed compensation under the same conditions.

  19. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
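
    Of the method families surveyed above, the singular value decomposition-based approaches are the simplest to sketch. The snippet below performs proper orthogonal decomposition (POD) of a snapshot matrix, a representative SVD-based reduction; the 99% energy threshold and the toy rank-one data are illustrative assumptions, not choices from the survey.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition: return a reduced basis whose modes
    capture a given fraction of the snapshot 'energy' (squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy) + 1)  # smallest rank reaching the threshold
    return U[:, :r]

# Essentially rank-one data (50 state variables, 20 snapshots) plus tiny noise
# reduces to a single dominant mode.
rng = np.random.default_rng(0)
snapshots = np.outer(np.linspace(0.0, 1.0, 50), np.ones(20))
snapshots += 1e-8 * rng.standard_normal((50, 20))
basis = pod_basis(snapshots)
```

Projecting the full reaction-network dynamics onto such a basis yields the reduced system.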

  20. Jump state estimation with multiple sensors with packet dropping and delaying channels

    NASA Astrophysics Data System (ADS)

    Dolz, Daniel; Peñarrocha, Ignacio; Sanchis, Roberto

    2016-03-01

    This work addresses the design of a state observer for systems whose outputs are measured through a communication network. The measurements from each sensor node are assumed to arrive randomly, scarcely and with a time-varying delay. The proposed model of the plant and the network measurement scenarios cover the cases of multiple sensors, out-of-sequence measurements, buffered measurements on a single packet and multirate sensor measurements. A jump observer is proposed that selects a different gain depending on the number of periods elapsed between successfully received measurements and on the available data. A finite set of gains is pre-calculated offline with a tractable optimisation problem, where the complexity of the observer implementation is a design parameter. The computational cost of the observer implementation is much lower than in the Kalman filter, whilst the performance is similar. Several examples illustrate the observer design for different measurement scenarios and observer complexity and show the achievable performance.

  1. A Novel approach for predicting monthly water demand by combining singular spectrum analysis with neural networks

    NASA Astrophysics Data System (ADS)

    Zubaidi, Salah L.; Dooley, Jayne; Alkhaddar, Rafid M.; Abdellatif, Mawada; Al-Bugharbee, Hussein; Ortega-Martorell, Sandra

    2018-06-01

    Valid and dependable water demand prediction is a major element of the effective and sustainable expansion of municipal water infrastructures. This study provides a novel approach to quantifying water demand through the assessment of climatic factors, using a combination of a pretreatment signal technique, a hybrid particle swarm optimisation algorithm and an artificial neural network (PSO-ANN). The Singular Spectrum Analysis (SSA) technique was adopted to decompose and reconstruct water consumption in relation to six weather variables, to create a seasonal and stochastic time series. The results revealed that SSA is a powerful technique, capable of decomposing the original time series into many independent components including trend, oscillatory behaviours and noise. In addition, the PSO-ANN algorithm was shown to be a reliable prediction model, outperforming the hybrid Backtracking Search Algorithm BSA-ANN in terms of fitness function (RMSE). The findings of this study also support the view that water demand is driven by climatological variables.
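
    The SSA pretreatment step can be sketched compactly: embed the series in a trajectory matrix, take its SVD, and map each rank-one term back to a time series by diagonal averaging. The window length and the synthetic trend-plus-oscillation series below are arbitrary illustrative choices, not the study's settings.

```python
import numpy as np

def ssa_components(series, window):
    """Basic singular spectrum analysis: return one reconstructed component
    per singular triple; the components sum back to the original series."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column i is the window starting at time i.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for j in range(len(s)):
        Xj = s[j] * np.outer(U[:, j], Vt[j])
        # Diagonal averaging: entries of each anti-diagonal share a time index.
        comp = np.array([np.mean(Xj[::-1].diagonal(t - window + 1))
                         for t in range(n)])
        comps.append(comp)
    return comps

series = np.sin(np.linspace(0.0, 8.0 * np.pi, 100)) + np.linspace(0.0, 2.0, 100)
comps = ssa_components(series, 20)
```

The leading components capture the trend and oscillatory behaviour; trailing ones hold noise, which is discarded before feeding the model.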

  2. Computational modelling of genome-scale metabolic networks and its application to CHO cell cultures.

    PubMed

    Rejc, Živa; Magdevska, Lidija; Tršelič, Tilen; Osolin, Timotej; Vodopivec, Rok; Mraz, Jakob; Pavliha, Eva; Zimic, Nikolaj; Cvitanović, Tanja; Rozman, Damjana; Moškon, Miha; Mraz, Miha

    2017-09-01

    Genome-scale metabolic models (GEMs) have become increasingly important in recent years. Currently, GEMs are the most accurate in silico representation of the genotype-phenotype link. They allow us to study complex networks from the systems perspective. Their application may drastically reduce the amount of experimental and clinical work, improve diagnostic tools and increase our understanding of complex biological phenomena. GEMs have also demonstrated high potential for the optimisation of bio-based production of recombinant proteins. Herein, we review the basic concepts, methods, resources and software tools used for the reconstruction and application of GEMs. We overview the evolution of the modelling efforts devoted to the metabolism of Chinese Hamster Ovary (CHO) cells. We present a case study on CHO cell metabolism under different amino acid depletions. This leads us to the identification of the most influential as well as essential amino acids in selected CHO cell lines.

  3. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
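
    The calibration loop described above pairs a model run (the cost evaluation) with a derivative-free optimiser. As a self-contained sketch, a toy one-parameter model stands in for WAVEWATCH III and a simple coordinate search stands in for the NLopt algorithms; both substitutions are assumptions for illustration only.

```python
import math

def rmse(model, params, observations):
    """Root-mean-square error of model predictions against (t, obs) pairs."""
    errs = [(model(t, params) - o) ** 2 for t, o in observations]
    return math.sqrt(sum(errs) / len(errs))

def coordinate_search(cost, x0, step=0.5, tol=1e-6):
    """Tiny derivative-free optimiser: try +/- step on each coordinate and
    halve the step whenever no move improves the cost."""
    x = list(x0)
    best = cost(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5
    return x, best

# Calibrate the slope of a toy "model" y = a * t against synthetic observations
# generated with a = 2; each cost evaluation corresponds to one model run.
observations = [(t, 2.0 * t) for t in range(1, 6)]
slope, cost_value = coordinate_search(
    lambda p: rmse(lambda t, q: q[0] * t, p, observations), [0.0])
```

In the real suite each cost evaluation is a full Cylc-orchestrated simulation, so minimising the number of evaluations is the point of the algorithm choice.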

  4. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in the assessment of medication error and optimisation cases in clinical practice and research.

  5. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established, combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it feasible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
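
    The uncertainty sampling step can be illustrated directly. The sketch below is a plain Latin hypercube sampler over the unit hypercube; the paper uses an *optimal* Latin hypercube (which additionally optimises the point spacing) and maps the samples to normal distributions, both of which are omitted here for brevity.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Plain Latin hypercube sample of n points in [0, 1)^dims: every
    dimension places exactly one point in each of its n equal-width strata."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)  # random stratum order decorrelates the dimensions
        columns.append([(s + rng.random()) / n for s in strata])
    return list(zip(*columns))

sample = latin_hypercube(10, 2)
```

Compared with independent uniform sampling, this guarantees marginal coverage of every parameter with the same number of (expensive) model evaluations.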

  6. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for MASs under the simultaneous presence of disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
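
    Stripped of the delay and disturbance terms, the underlying distributed optimisation idea is that each agent mixes a consensus term over its neighbours with a gradient step on its private objective. The sketch below is a minimal, assumption-laden illustration (quadratic local objectives, a fixed ring topology, a hand-tuned consensus gain), not the paper's internal-model algorithm; with a constant step size it only reaches an approximate consensus near the team optimum.

```python
def distributed_optimise(targets, neighbours, steps=5000, alpha=0.002, c=100.0):
    """Each agent i minimises f_i(x) = (x - targets[i])^2 while exchanging
    states with its graph neighbours. The consensus gain c (large relative
    to the gradient term) drives agents towards the minimiser of sum_i f_i."""
    x = list(targets)  # each agent starts at its own local minimiser
    for _ in range(steps):
        # Synchronous update: every agent uses the previous round's states.
        x = [xi + alpha * (c * sum(x[j] - xi for j in neighbours[i])
                           - 2.0 * (xi - targets[i]))
             for i, xi in enumerate(x)]
    return x

# Ring of four agents; the team optimum is the mean of the targets (= 3).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = distributed_optimise([0.0, 2.0, 4.0, 6.0], ring)
```

Exact convergence, and robustness to delays and disturbances, is what the paper's internal-model construction and Lyapunov-Krasovskii analysis provide on top of this basic scheme.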

  7. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times arise widely in many areas and are among the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, resulting in a linear combination of the states at the LG points in the subdomain and thereby avoiding complex nonlinear integration. The sensitivities of the states at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and the results compared in detail with those of methods reported in the literature. The results show the effectiveness of the proposed method.

  8. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  9. Pricing, manufacturing and inventory policies for raw material in a three-level supply chain

    NASA Astrophysics Data System (ADS)

    Allah Taleizadeh, Ata; Noori-daryan, Mahsa

    2016-03-01

    We study a decentralised three-layer supply chain comprising a supplier, a producer and several retailers. The retailers place their orders with the producer, and the producer places his orders with the supplier. We assume that demand is price-sensitive and that shortages are not permitted. The goal of the paper is to optimise the total cost of the supply chain network by coordinating decision-making policy using a Stackelberg-Nash equilibrium. The decision variables of our model are the supplier's price, the producer's price, and the numbers of shipments received by the supplier and the producer, respectively. To illustrate the applicability of the proposed model, numerical examples are presented.

  10. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
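
    Consensus clustering methods such as CC and MOCC start from the agreement among base clusterings. A common way to represent that agreement, shown below as an illustrative sketch rather than the MOCC criterion itself, is the co-association matrix: the fraction of base clusterings that assign each pair of items to the same cluster.

```python
from itertools import combinations

def coassociation(labelings):
    """Co-association (consensus) matrix from a list of base clusterings,
    each given as a label per item. Entry (i, j) is the fraction of base
    clusterings that put items i and j in the same cluster."""
    n = len(labelings[0])
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                m[i][j] += 1
                m[j][i] += 1
    k = float(len(labelings))
    return [[v / k for v in row] for row in m]

matrix = coassociation([[0, 0, 1, 1], [0, 0, 0, 1], [1, 1, 0, 0]])
```

A consensus algorithm then searches for the partition that best agrees with this matrix, e.g. by optimising an agreement/separation criterion as MOCC does.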

  11. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimised solution of the approach problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  12. Data-driven nonlinear optimisation of a simple air pollution dispersion model generating high resolution spatiotemporal exposure

    NASA Astrophysics Data System (ADS)

    Yuval; Bekhor, Shlomo; Broday, David M.

    2013-11-01

    Spatially detailed estimation of exposure to air pollutants in the urban environment is needed for many air pollution epidemiological studies. To benefit studies of acute effects of air pollution, such exposure maps are required at high temporal resolution. This study introduces a nonlinear optimisation framework that produces high-resolution spatiotemporal exposure maps. An extensive traffic model output, serving as a proxy for traffic emissions, is fitted, via a nonlinear model embodying basic dispersion properties, to high-temporal-resolution routine observations of a traffic-related air pollutant. An optimisation problem is formulated and solved at each time point to recover the unknown model parameters. These parameters are then used to produce a detailed concentration map of the pollutant for the whole area covered by the traffic model. Repeating the process for multiple time points results in the spatiotemporal concentration field. The exposure at any location and for any span of time can then be computed by temporal integration of the concentration time series at selected receptor locations for the durations of the desired periods. The methodology is demonstrated for NO2 exposure using the output of a traffic model for the greater Tel Aviv area, Israel, and the half-hourly monitoring and meteorological data from the local air quality network. A leave-one-out cross-validation resulted in simulated half-hourly concentrations that are almost unbiased compared to the observations, with a mean error (ME) of 5.2 ppb, a normalised mean error (NME) of 32%, 78% of the simulated values within a factor of two (FAC2) of the observations, and a coefficient of determination (R2) of 0.6. The whole-study-period integrated exposure estimations are also unbiased compared with their corresponding observations, with an ME of 2.5 ppb, an NME of 18%, a FAC2 of 100% and an R2 of 0.62.
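
    The validation statistics quoted above are standard and easy to reproduce. The sketch below computes ME (taken here as mean absolute error), NME, FAC2 and R2 under common air-quality evaluation definitions; the paper's exact formulas may differ in detail.

```python
def evaluation_metrics(obs, sim):
    """Common model-evaluation statistics: mean (absolute) error ME,
    normalised mean error NME, fraction within a factor of two FAC2,
    and coefficient of determination R2."""
    n = len(obs)
    abs_err = sum(abs(s - o) for o, s in zip(obs, sim))
    me = abs_err / n
    nme = abs_err / sum(obs)                      # normalised by total observed
    fac2 = sum(1 for o, s in zip(obs, sim) if 0.5 <= s / o <= 2.0) / n
    mean_o = sum(obs) / n
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    return {"ME": me, "NME": nme, "FAC2": fac2, "R2": r2}

metrics = evaluation_metrics([10.0, 20.0, 30.0], [12.0, 18.0, 30.0])
```

Applied per receptor in a leave-one-out loop, these statistics summarise how well the dispersion fit generalises to unseen monitoring sites.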

  13. Local pursuit strategy-inspired cooperative trajectory planning algorithm for a class of nonlinear constrained dynamical systems

    NASA Astrophysics Data System (ADS)

    Xu, Yunjun; Remeikas, Charles; Pham, Khanh

    2014-03-01

    Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. 
The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.

  14. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  15. DiME: A Scalable Disease Module Identification Algorithm with Application to Glioma Progression

    PubMed Central

    Liu, Yunpeng; Tennant, Daniel A.; Zhu, Zexuan; Heath, John K.; Yao, Xin; He, Shan

    2014-01-01

A disease module is a group of molecular components that interact intensively in a disease-specific biological network. Since the connectivity and activity of disease modules may shed light on the molecular mechanisms of pathogenesis and disease progression, their identification has become one of the most important challenges in network medicine, an emerging paradigm for the study of complex human disease. This paper proposes a novel algorithm, DiME (Disease Module Extraction), to identify putative disease modules from biological networks. We have developed novel heuristics to optimise Community Extraction, a module criterion originally proposed for social network analysis, to extract topological core modules from biological networks as putative disease modules. In addition, we have incorporated a statistical significance measure, B-score, to evaluate the quality of extracted modules. As an application to complex diseases, we have employed DiME to investigate the molecular mechanisms that underpin the progression of glioma, the most common type of brain tumour. We have built low- (grade II) and high- (GBM) grade glioma co-expression networks from three independent datasets and then applied DiME to extract potential disease modules from both networks for comparison. Examination of the interconnectivity of the identified modules has revealed changes in topology and module activity (expression) between low- and high-grade tumours, which are characteristic of the major shifts in the constitution and physiology of tumour cells during glioma progression. Our results suggest that the transcription factors E2F4, AR and ETS1 are potential key regulators in tumour progression. The compiled DiME software, R/C++ source code, sample data and a tutorial are available at http://www.cs.bham.ac.uk/~szh/DiME. PMID:24523864
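The community-extraction criterion DiME builds on can be sketched independently of the authors' heuristics: score a node set by contrasting its internal edge density with the density of edges leaving it, and improve the set by single-node moves. This is a toy greedy version (not DiME itself), assuming `networkx` is available:

```python
import networkx as nx

def extraction_score(G, S):
    """Community-extraction criterion W(S): reward dense edges within S
    and penalise edges crossing the boundary to the rest of the graph."""
    S = set(S)
    rest = set(G) - S
    if not S or not rest:
        return 0.0
    O = G.subgraph(S).number_of_edges()                        # within S
    B = sum(1 for u, v in G.edges() if (u in S) != (v in S))   # boundary
    return len(S) * len(rest) * (O / len(S) ** 2 - B / (len(S) * len(rest)))

def greedy_extract(G, seed):
    """Grow or shrink a candidate module by single-node toggles until no
    move improves the extraction score."""
    S = {seed}
    improved = True
    while improved:
        improved = False
        base = extraction_score(G, S)
        best, best_gain = None, 0.0
        for v in G:
            cand = S ^ {v}                 # toggle membership of v
            if not cand:
                continue
            gain = extraction_score(G, cand) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is not None:
            S ^= {best}
            improved = True
    return S
```

On a graph consisting of a 5-clique with a sparse chain attached, the greedy search recovers the clique from a seed inside it.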

  16. Using Complexity and Network Concepts to Inform Healthcare Knowledge Translation

    PubMed Central

    Kitson, Alison; Brook, Alan; Harvey, Gill; Jordan, Zoe; Marshall, Rhianon; O’Shea, Rebekah; Wilson, David

    2018-01-01

    Many representations of the movement of healthcare knowledge through society exist, and multiple models for the translation of evidence into policy and practice have been articulated. Most are linear or cyclical and very few come close to reflecting the dense and intricate relationships, systems and politics of organizations and the processes required to enact sustainable improvements. We illustrate how using complexity and network concepts can better inform knowledge translation (KT) and argue that changing the way we think and talk about KT could enhance the creation and movement of knowledge throughout those systems needing to develop and utilise it. From our theoretical refinement, we propose that KT is a complex network composed of five interdependent sub-networks, or clusters, of key processes (problem identification [PI], knowledge creation [KC], knowledge synthesis [KS], implementation [I], and evaluation [E]) that interact dynamically in different ways at different times across one or more sectors (community; health; government; education; research for example). We call this the KT Complexity Network, defined as a network that optimises the effective, appropriate and timely creation and movement of knowledge to those who need it in order to improve what they do. Activation within and throughout any one of these processes and systems depends upon the agents promoting the change, successfully working across and between multiple systems and clusters. The case is presented for moving to a way of thinking about KT using complexity and network concepts. This extends the thinking that is developing around integrated KT approaches. There are a number of policy and practice implications that need to be considered in light of this shift in thinking. PMID:29524952

  17. Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project

    DTIC Science & Technology

    2005-05-01

    thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. And good optimisations codes are required in order to achieve the best possible absorber designs . In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of

  18. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesised based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristic. We also consider current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
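The routing step treats feeder layout as a CVRP; a bare-bones nearest-neighbour construction conveys the idea (this is not DINGO's metaheuristic, and the node names, coordinates and capacity below are invented):

```python
import math

def cvrp_greedy(depot, coords, demands, capacity):
    """Nearest-neighbour construction for a capacitated vehicle routing
    problem: each route (feeder) starts at the depot (substation), extends
    to the closest unserved load point whose demand still fits, and is
    closed when nothing more fits."""
    unserved = set(demands)
    routes = []
    while unserved:
        route, load, here = [], 0.0, depot
        while True:
            fits = [n for n in unserved if load + demands[n] <= capacity]
            if not fits:
                if not route:
                    raise ValueError("a single demand exceeds capacity")
                break
            here = min(fits, key=lambda n: math.dist(coords[here], coords[n]))
            route.append(here)
            load += demands[here]
            unserved.discard(here)
        routes.append(route)
    return routes
```

A local search metaheuristic, as used in DINGO, would then improve these constructed routes by moves such as relocating or swapping load points between feeders.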

  19. Exploring pig trade patterns to inform the design of risk-based disease surveillance and control strategies

    PubMed Central

    Guinat, C.; Relun, A.; Wall, B.; Morris, A.; Dixon, L.; Pfeiffer, D. U.

    2016-01-01

An understanding of the patterns of animal contact networks provides essential information for the design of risk-based animal disease surveillance and control strategies. This study characterises pig movements throughout England and Wales between 2009 and 2013 with a view to describing spatial and temporal patterns, network topology and trade communities. Data were extracted from the Animal and Plant Health Agency (APHA)’s RADAR (Rapid Analysis and Detection of Animal-related Risks) database, and analysed using descriptive and network approaches. A total of 61,937,855 pigs were moved through 872,493 batch movements in England and Wales during the 5-year study period. Results show that the network exhibited scale-free and small-world topologies, indicating the potential for diseases to spread quickly within the pig industry. The findings also suggest how risk-based surveillance strategies could be optimised in the country by taking account of highly connected holdings, geographical regions and time periods with the greatest numbers of movements and pigs moved, as these are likely to be at higher risk of disease introduction. This study is also the first attempt to identify trade communities in the country, information which could be used to facilitate the pig trade and maintain disease-free status across the country in the event of an outbreak. PMID:27357836
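The small-world claim can be checked on any contact network by comparing its clustering and mean path length with those of a density-matched random graph (illustrated here on a synthetic Watts-Strogatz graph, not the APHA movement data; `networkx` is assumed available):

```python
import networkx as nx

def small_world_indicators(G, seed=42):
    """Contrast G's clustering and mean path length with a random graph of
    the same size and density; clustering far above the random baseline
    with a comparably short path length is the usual small-world
    signature."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    R = nx.gnm_random_graph(n, m, seed=seed)
    if not nx.is_connected(R):                  # keep the giant component
        R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
    C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
    L, L_rand = (nx.average_shortest_path_length(G),
                 nx.average_shortest_path_length(R))
    return C, C_rand, L, L_rand
```

For a directed trade network one would additionally inspect the in- and out-degree distributions for the heavy tails characteristic of scale-free topology.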

  20. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic algorithm called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms, which demonstrates the efficiency of FA in stacking sequence optimisation of laminated composite structures.
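The canonical firefly update can be written compactly; shown here on a continuous test function (the stacking-sequence problem itself is discrete, and all parameter values below are illustrative assumptions rather than the paper's settings):

```python
import math
import random

def firefly_minimise(f, dim=2, n=20, iters=200,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Firefly algorithm (Yang): every firefly moves toward each brighter
    (lower-cost) firefly with an attractiveness that decays with distance,
    plus a small random step that is cooled over the iterations."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in X]
    for t in range(iters):
        step = alpha * 0.97 ** t                  # cooling schedule
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:               # j is brighter than i
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + step * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    fit[i] = f(X[i])
    best = min(range(n), key=lambda i: fit[i])
    return X[best], fit[best]
```

For stacking sequence optimisation the continuous positions would be mapped to a discrete set of allowed ply angles before each fitness evaluation.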

  1. Machine learning for outcome prediction of acute ischemic stroke post intra-arterial therapy.

    PubMed

    Asadi, Hamed; Dowling, Richard; Yan, Bernard; Mitchell, Peter

    2014-01-01

Stroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke. We conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. These algorithms were trained, validated and tested using randomly divided data. We included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male, and the mean age was 65.3 years. All the available demographic, procedural and clinical factors were included in the models. The final confusion matrix of the neural network demonstrated an overall congruency of ∼80% between the target and output classes, with favourable receiver operating characteristics. However, after optimisation, the support vector machine had relatively better performance, with a root mean squared error of 2.064 (SD: ± 0.408). We showed promising accuracy of outcome prediction using supervised machine learning algorithms, with potential for incorporating larger multicentre datasets, likely further improving prediction.
Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.
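The modelling comparison can be reproduced in outline with scikit-learn on synthetic data (a stand-in for the 107-patient cohort; the feature count, split and hyperparameters below are assumptions, not the study's configuration):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the cohort: 107 samples, 10 clinical predictors.
X, y = make_classification(n_samples=107, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

models = {
    "neural_net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(16,),
                                              max_iter=2000, random_state=0)),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                  # train, then score held-out data
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 2))
```

With real multicentre data the same pipeline structure would apply, with cross-validation replacing the single random split.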

  2. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

The evolutionary algorithm called non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step of rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L size solutions are robust to process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
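The integer encoding idea can be sketched as a chromosome decode step (the grid value and the W/L bounds below are invented for illustration; the paper's actual technology and limits are not given here):

```python
GRID = 0.18e-6   # assumed fabrication grid in metres; purely illustrative

def decode(genes, w_min=1, w_max=200, l_min=1, l_max=20):
    """Integer-encoded chromosome for MOSFET sizing: each gene counts grid
    units, so every decoded W and L is automatically a multiple of the
    fabrication grid and needs no post-hoc rounding."""
    sizes = []
    for w_units, l_units in zip(genes[::2], genes[1::2]):
        w = max(w_min, min(w_max, w_units)) * GRID   # clip, then scale
        l = max(l_min, min(l_max, l_units)) * GRID
        sizes.append((w, l))
    return sizes
```

Because crossover and mutation then operate on integers, every offspring is fabrication-legal by construction.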

  3. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the sensor combination best suited to locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of the meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064

  4. Optimisation of active suspension control inputs for improved vehicle handling performance

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor

    2016-11-01

Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to FAS-only actuator configurations and three types of double lane-change manoeuvres. The obtained optimisation results are used to gain insights into the different control mechanisms that FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres, the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve the handling performance.

  5. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

It is well known that calculating and reducing the radar cross-section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. Therefore, this paper develops a structure and scattering array factor coupling model of the APAA based on the phase errors of radiated elements generated by structural distortion and installation error of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiated elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of the APAA.
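A generic particle swarm of the kind adopted here can be sketched on a stand-in objective (the real fitness combining gain loss and the scattering array factor is not reproduced; swarm parameters are illustrative):

```python
import random

def pso(f, dim, n=30, iters=150, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard global-best particle swarm: each particle is pulled toward
    its own best position and the swarm's best position; here f would be a
    weighted sum of gain loss and the scattering array factor."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                      # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    G, gf = X[i][:], fx
    return G, gf
```

In the antenna problem the decision variables would be the installation heights of the radiated elements.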

  6. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.

  7. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

Witold Waldman and Manfred...minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. It is based...

  8. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    NASA Astrophysics Data System (ADS)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  9. Optimisation techniques in vaginal cuff brachytherapy.

    PubMed

    Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A

    2009-11-01

The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. The intention was to keep the Vk dose above 95% and the Rg dose below 85% of the prescribed dose. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution, with a mean dose of 99+/-7.4% of the prescription within the target volume, although time-position optimisation was required in most fractions.

  10. Effect of preventive (beta blocker) treatment, behavioural migraine management, or their combination on outcomes of optimised acute treatment in frequent migraine: randomised controlled trial.

    PubMed

    Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina

    2010-09-29

To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days), during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2).
For a clinically significant reduction (≥50%) in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
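The number-needed-to-treat figures follow mechanically from the absolute risk reduction; a worked sketch with illustrative responder proportions (these proportions are not values reported by the trial):

```python
def number_needed_to_treat(p_treatment, p_control):
    """NNT = 1 / absolute risk reduction: how many patients must receive
    the treatment for one additional patient to achieve the outcome (here,
    a >=50% reduction in migraines/30 days)."""
    arr = p_treatment - p_control
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1.0 / arr

# Illustrative responder proportions only; the trial reports an NNT of 3.1
# for combined treatment versus optimised acute treatment alone, which
# corresponds to an absolute risk reduction of roughly 0.32.
print(round(number_needed_to_treat(0.77, 0.45), 1))
```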

  11. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. A regression model of the significant factors, namely pulse-on time, pulse-off time, peak current, and wire feed, is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal machining parameter settings were obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
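The Grey relational grade computation itself is mechanical: normalise each response toward its ideal, convert deviations into grey relational coefficients, and average. A sketch with made-up response values (zeta = 0.5 is the conventional distinguishing coefficient):

```python
def grey_relational_grade(runs, larger_better, zeta=0.5):
    """Grey relational analysis: normalise each response column to [0, 1]
    with the ideal at 1, turn deviations from the ideal into grey
    relational coefficients, and average them into one grade per run."""
    cols = list(zip(*runs))
    norm_cols = []
    for col, bigger in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        if hi == lo:                        # constant column: already ideal
            norm_cols.append([1.0] * len(col))
        elif bigger:                        # larger-the-better (e.g. MRR)
            norm_cols.append([(v - lo) / (hi - lo) for v in col])
        else:                               # smaller-the-better (Ra, kerf)
            norm_cols.append([(hi - v) / (hi - lo) for v in col])
    grades = []
    for row in zip(*norm_cols):
        deltas = [1.0 - v for v in row]     # deviation from the ideal run
        # coefficient = (dmin + zeta*dmax) / (delta + zeta*dmax),
        # with dmin = 0 and dmax = 1 after normalisation
        coeffs = [zeta / (d + zeta) for d in deltas]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

The run with the highest grade is then taken as the best compromise across MRR, surface roughness and kerf width.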

  12. Fields and coupling between coils embedded in conductive environments

    NASA Astrophysics Data System (ADS)

    Chu, Son; Vallecchi, Andrea; Stevens, Christopher J.; Shamonina, Ekaterina

    2018-02-01

An approximate solution is developed for the mutual inductance of two circular coils enclosed by insulating cavities in a conducting medium. This solution is used to investigate how the mutual inductance varies with the conductivity of the background (e.g., soil, seawater or the human body), as well as with other parameters such as the vertical separation of the coils and the displacement of one of the coils in the horizontal plane. Our theoretical results are compared with full-wave simulations and with a previous solution valid when a conductive slab is inserted between two coupled resonant coils. The proposed approach can have a direct impact on the design and optimisation of magnetoinductive waveguides and wireless power transfer for underground/underwater networks and embedded biomedical systems.

  13. Asset deterioration and discolouration in water distribution systems.

    PubMed

    Husband, P S; Boxall, J B

    2011-01-01

Water distribution systems function to supply treated water that is safe for human consumption and complies with increasingly stringent quality regulations. Considered primarily an aesthetic issue, discolouration is the largest cause of customer dissatisfaction associated with distribution system water quality. Pro-active measures to prevent discolouration are sought, yet network processes remain insufficiently understood to fully justify and optimise capital or operational strategies to manage discolouration risk. Results are presented from a comprehensive fieldwork programme in UK water distribution networks that has determined asset deterioration with respect to discolouration. This is achieved by quantification of the material accumulating as cohesive layers on pipe surfaces, which when mobilised is acknowledged as the primary cause of discolouration. It is shown that these material layers develop ubiquitously, with defined layer strength characteristics, and at a consistent and repeatable rate dependent on water quality. For UK networks, the iron concentration in the bulk water is shown to be a potential indicator of deterioration rate. With material layer development rates determined, management decisions that balance discolouration risk and expenditure to maintain water quality integrity can be justified. In particular, the balance between capital investment, such as improving water treatment output or pipe renewal, and operational expenditure, such as the frequency of network maintenance through flushing, may be judged. While the rate of development is shown to be a function of water quality, the magnitude (peak or average turbidity) of discolouration incidents is shown to be dominated by hydraulic conditions. From this it can be proposed that network hydraulic management, such as regular periodic 'stressing', is a potential strategy for reducing discolouration risk.
The ultimate application of this is the hydraulic design of self-cleaning networks to maintain discolouration risk below acceptable levels.

  14. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework, which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The needs for the exploration of larger design spaces and to produce innovative designs make meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.
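One BESO-style update step can be sketched in isolation (the element sensitivities would come from the coupled Lattice Boltzmann/structural solve, which is not reproduced here; the move limit is an illustrative assumption):

```python
def beso_update(densities, sensitivities, target_frac, move=0.02):
    """One bi-directional evolutionary structural optimisation step: rank
    elements by sensitivity, keep the most efficient ones solid and switch
    the rest void, moving the volume fraction toward its target by at most
    `move` per iteration."""
    n = len(densities)
    current = sum(densities) / n
    if current > target_frac:                    # remove material gradually
        frac = max(target_frac, current - move)
    else:                                        # or add material gradually
        frac = min(target_frac, current + move)
    keep = round(frac * n)
    order = sorted(range(n), key=lambda i: sensitivities[i], reverse=True)
    new = [0] * n
    for i in order[:keep]:
        new[i] = 1
    return new
```

Iterating this update with re-evaluated sensitivities drives the design toward the target volume fraction while retaining the most effective elements, which is the evolutionary mechanism the coupled framework relies on.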

  15. Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.

    PubMed

    Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne

    2017-01-01

    Aim: To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than around processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis was conducted of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings: The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents, which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. 
We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.

  16. To connect or not to connect? Modelling the optimal degree of centralisation for wastewater infrastructures.

    PubMed

    Eggimann, Sven; Truffer, Bernhard; Maurer, Max

    2015-11-01

    The strong reliance of most utility services on centralised network infrastructures is becoming increasingly challenged by new technological advances in decentralised alternatives. However, not enough effort has been made to develop planning tools designed to address the implications of these new opportunities and to determine the optimal degree of centralisation of these infrastructures. We introduce a planning tool for sustainable network infrastructure planning (SNIP), a two-step techno-economic heuristic modelling approach based on shortest path-finding and hierarchical-agglomerative clustering algorithms to determine the optimal degree of centralisation in the field of wastewater management. This SNIP model optimises the distribution of wastewater treatment plants and the sewer network layout relative to several cost and sewer-design parameters. Moreover, it allows us to construct alternative optimal wastewater system designs taking into account topography, economies of scale and the full size range of wastewater treatment plants. We quantify and confirm that the optimal degree of centralisation decreases with increasing terrain complexity and settlement dispersion, while showing that the effect of the latter exceeds that of topography. Case study results for a Swiss community indicate that the calculated optimal degree of centralisation is substantially lower than the current level. Copyright © 2015 Elsevier Ltd. All rights reserved.
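
    The centralisation trade-off described above (economies of scale in treatment versus sewer costs that grow with distance and terrain complexity) can be sketched in a few lines. All cost figures and the sublinear scale exponent below are illustrative assumptions, not parameters of the SNIP model:

```python
def plant_cost(capacity, scale_exp=0.7):
    """Treatment-plant cost with economies of scale: cost grows sublinearly
    with capacity (illustrative exponent)."""
    return capacity ** scale_exp

def system_cost(settlements, centralised, sewer_unit_cost=0.02, terrain_factor=1.0):
    """Total cost of one central plant plus sewers, or one plant per settlement.

    settlements: list of (distance_to_centre, load) tuples.
    """
    if centralised:
        total_load = sum(load for _, load in settlements)
        sewers = sum(dist * load * sewer_unit_cost * terrain_factor
                     for dist, load in settlements)
        return plant_cost(total_load) + sewers
    return sum(plant_cost(load) for _, load in settlements)

# Compact settlements on easy terrain vs dispersed settlements.
compact = [(1.0, 100), (2.0, 100), (1.5, 100)]
dispersed = [(30.0, 100), (45.0, 100), (60.0, 100)]
```

    For the compact case centralisation is cheaper; raising the terrain factor or the distances tips the balance towards decentralised plants, mirroring the paper's qualitative finding.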

  17. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
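
    The combination policy (free replacement up to w1, pro-rata rebate until w2) lends itself to a compact Monte Carlo sketch. The Weibull parameters and price below are made-up illustrations, not the paper's fitted bulb data:

```python
import math
import random

def warranty_cost(n_sims, w1, w2, price, shape, scale, seed=42):
    """Monte Carlo estimate of the expected per-unit warranty cost under a
    combination policy: free replacement before w1, pro-rata rebate in [w1, w2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        # Inverse-transform draw of a Weibull(shape, scale) time to failure.
        t = scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
        if t < w1:
            total += price                         # full replacement cost
        elif t < w2:
            total += price * (w2 - t) / (w2 - w1)  # linearly decreasing rebate
    return total / n_sims

short = warranty_cost(20000, w1=0.5, w2=1.0, price=10.0, shape=2.0, scale=2.0)
long_ = warranty_cost(20000, w1=1.0, w2=2.0, price=10.0, shape=2.0, scale=2.0)
```

    With identical failure draws, extending the warranty window can only raise the expected cost, which is the trade-off the manufacturer weighs against the policy's marketing value.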

  18. A new web-based modelling tool (Websim-MILQ) aimed at optimisation of thermal treatments in the dairy industry.

    PubMed

    Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P

    2008-11-30

    In the framework of a cooperative EU research project (MILQ-QC-TOOL) a web-based modelling tool (WebSim-MILQ) was developed for the optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied to the optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for the optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (1 extra cheese for each 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.

  19. A brief understanding of process optimisation in microwave-assisted extraction of botanical materials: options and opportunities with chemometric tools.

    PubMed

    Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C

    2014-01-01

    Extraction forms the very basic step in research on natural products for drug discovery. A poorly optimised and planned extraction methodology can jeopardise the entire mission. To provide a vivid picture of different chemometric tools and planning for process optimisation and method development in extraction of botanical material, with emphasis on microwave-assisted extraction (MAE) of botanical material. A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to discover what the significant extraction factors were. Optimising a response by fine-tuning those factors, experimental design or statistical design of experiment (DoE), which is a core area of study in chemometrics, was then used for statistical analysis and interpretations. In this review a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, are presented. A detailed study on various factors and responses involved in the optimisation is also presented. This article will assist in obtaining a better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
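
    The design-of-experiments core of such chemometric planning is easy to illustrate: a two-level full-factorial design enumerates every combination of low/high factor settings. The factor names and levels below are hypothetical MAE parameters, not values from any reviewed study:

```python
from itertools import product

def full_factorial(factors):
    """Two-level full-factorial design: one run for every combination of
    low/high settings of each factor."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

# Hypothetical MAE factors and levels.
design = full_factorial({
    "power_W": (300, 600),
    "time_min": (5, 15),
    "solvent_ratio": (10, 30),
})
```

    Fitting a response model to these 2^3 = 8 runs lets the analyst rank factor effects and interactions before fine-tuning the influential parameters.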

  20. Models of Acetylcholine and Dopamine Signals Differentially Improve Neural Representations

    PubMed Central

    Holca-Lamarre, Raphaël; Lücke, Jörg; Obermayer, Klaus

    2017-01-01

    Biological and artificial neural networks (ANNs) represent input signals as patterns of neural activity. In biology, neuromodulators can trigger important reorganizations of these neural representations. For instance, pairing a stimulus with the release of either acetylcholine (ACh) or dopamine (DA) evokes long-lasting increases in the responses of neurons to the paired stimulus. The functional roles of ACh and DA in rearranging representations remain largely unknown. Here, we address this question using a Hebbian-learning neural network model. Our aim is both to gain a functional understanding of ACh and DA transmission in shaping biological representations and to explore neuromodulator-inspired learning rules for ANNs. We model the effects of ACh and DA on synaptic plasticity and confirm that stimuli coinciding with greater neuromodulator activation are over-represented in the network. We then simulate the physiological release schedules of ACh and DA. We measure the impact of neuromodulator release on the network's representation and on its performance on a classification task. We find that ACh and DA trigger distinct changes in neural representations that both improve performance. The putative ACh signal redistributes neural preferences so that more neurons encode stimulus classes that are challenging for the network. The putative DA signal adapts synaptic weights so that they better match the classes of the task at hand. Our model thus offers a functional explanation for the effects of ACh and DA on cortical representations. Additionally, our learning algorithm yields performances comparable to those of state-of-the-art optimisation methods in multi-layer perceptrons while requiring weaker supervision signals and interacting with synaptically-local weight updates. PMID:28690509
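
    The core mechanism, Hebbian plasticity gated by a neuromodulator signal, can be sketched in a few lines. The learning rate and pairing schedule are arbitrary choices for illustration, not the model's fitted values:

```python
def hebbian_step(w, x, y, neuromod, lr=0.1):
    """One Hebbian update scaled by a neuromodulator signal: stimuli paired
    with stronger ACh/DA-like release are potentiated more."""
    return [wi + lr * neuromod * y * xi for wi, xi in zip(w, x)]

def response(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [0.1, 0.1]
stim_a, stim_b = [1.0, 0.0], [0.0, 1.0]
for _ in range(20):
    w = hebbian_step(w, stim_a, response(w, stim_a), neuromod=1.0)  # high release
    w = hebbian_step(w, stim_b, response(w, stim_b), neuromod=0.2)  # low release
```

    After training, the stimulus paired with the stronger neuromodulator signal dominates the weight vector, i.e. it is over-represented in the network's responses.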

  1. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of the metal-oxide-semiconductor field-effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the PSP (Penn State-Philips) surface-potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent foraging behaviour of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
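
    The extraction loop, minimising the error between measured and modelled characteristics, can be sketched with a minimal PSO. A toy square-law device stands in for the PSP model here, and the swarm constants are conventional defaults rather than the study's settings:

```python
import random

def pso_extract(measured, model, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm minimising the RMS error between measured and
    modelled data; a toy stand-in for MOSFET parameter extraction."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rms(p):
        return (sum((i_meas - model(v, p)) ** 2 for v, i_meas in measured)
                / len(measured)) ** 0.5

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_err = [rms(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = list(pbest[g]), pbest_err[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus cognitive and social pulls.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            e = rms(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = list(pos[i]), e
                if e < gbest_err:
                    gbest, gbest_err = list(pos[i]), e
    return gbest, gbest_err

# Toy square-law device, i = k * (v - vt)^2, with true k = 0.5 and vt = 0.7.
square_law = lambda v, p: p[0] * max(v - p[1], 0.0) ** 2
data = [(v / 10, 0.5 * max(v / 10 - 0.7, 0.0) ** 2) for v in range(8, 21)]
params, err = pso_extract(data, square_law, [(0.0, 2.0), (0.0, 2.0)])
```

    On this smooth two-parameter problem the swarm recovers the true parameters closely; the real extraction task differs only in the model evaluated inside `rms`.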

  2. Computational aero-acoustics for fan duct propagation and radiation. Current status and application to turbofan liner optimisation

    NASA Astrophysics Data System (ADS)

    Astley, R. J.; Sugimoto, R.; Mustafi, P.

    2011-08-01

    Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state-of-the-art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.

  3. Concurrent enterprise: a conceptual framework for enterprise supply-chain network activities

    NASA Astrophysics Data System (ADS)

    Addo-Tenkorang, Richard; Helo, Petri T.; Kantola, Jussi

    2017-04-01

    Supply-chain management (SCM) in manufacturing industries has evolved significantly over the years. Recently, much of the relevant research has focused on the development of integrated solutions: the collaborative optimisation of geographical, just-in-time (JIT), quality (customer demand/satisfaction) and return-on-investment (profit) aspects of organisational management and planning through 'best practice' business-process management concepts and applications, employing system tools such as enterprise resource planning (ERP) and SCM information technology (IT) enablers to enhance enterprise-integrated product development and concurrent engineering principles. This article draws on three main applications of organisation theory in positioning its assumptions. It proposes a feasible industry-specific framework not currently included within the SCOR model's level four (4) implementation level or within other existing SCM integration reference models, such as the MIT Process Handbook's Process Interchange Format (PIF) and the TOVE project, which could also be replicated in other supply chains. The wider focus of this paper's contribution, however, is a complementary framework proposed for the SCC's SCOR reference model. Quantitative empirical closed-ended questionnaires, in addition to the main data collected from a qualitative empirical real-life industrial pilot case study, were used to propose a conceptual concurrent enterprise framework for SCM network activities. This research adopts a design structure matrix simulation approach to propose an optimal enterprise SCM-networked value-adding, customised master data-management platform/portal for efficient SCM network information exchange and an effective supply-chain (SC) network systems-design team structure. 
Furthermore, social network theory analysis is employed in a triangulation approach with statistical correlation analysis, alongside extensive literature reviews, to assess the frequency, importance, level of collaboration, mutual trust, and roles and responsibilities within the enterprise SCM network for systems product development (PD) design teams' technical communication.

  4. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    NASA Astrophysics Data System (ADS)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

    The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and to accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for HF communication users. The model will provide a valuable tool for measuring the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station at Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

  5. New machine-learning algorithms for prediction of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Mandal, Indrajit; Sairam, N.

    2014-03-01

    This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), to prevent delay and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods for predicting Parkinson's disease include sparse multinomial logistic regression, rotation forest ensembles with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as classifier and Haar wavelets as projection filter, is used for relevant feature selection and ranking. The highest accuracy obtained by linear logistic regression and sparse multinomial logistic regression is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All the experiments are conducted at 95% and 99% confidence levels and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system and experimentally shows the best results with supportive statistical inference.

  6. Ultra-high bandwidth quantum secured data transmission

    PubMed Central

    Dynes, James F.; Tam, Winci W-S.; Plews, Alan; Fröhlich, Bernd; Sharpe, Andrew W.; Lucamarini, Marco; Yuan, Zhiliang; Radig, Christian; Straw, Andrew; Edwards, Tim; Shields, Andrew J.

    2016-01-01

    Quantum key distribution (QKD) provides an attractive means for securing communications in optical fibre networks. However, deployment of the technology has been hampered by the frequent need for dedicated dark fibres to segregate the very weak quantum signals from conventional traffic. Up until now the coexistence of QKD with data has been limited to bandwidths that are orders of magnitude below those commonly employed in fibre optic communication networks. Using an optimised wavelength division multiplexing scheme, we transport QKD and the prevalent 100 Gb/s data format in the forward direction over the same fibre for the first time. We show a full quantum encryption system operating with a bandwidth of 200 Gb/s over a 100 km fibre. Exploring the ultimate limits of the technology by experimental measurements of the Raman noise, we demonstrate it is feasible to combine QKD with 10 Tb/s of data over a 50 km link. These results suggest it will be possible to integrate QKD and other quantum photonic technologies into high bandwidth data communication infrastructures, thereby allowing their widespread deployment. PMID:27734921

  7. Mapping uncharted territory in ice from zeolite networks to ice structures.

    PubMed

    Engel, Edgar A; Anelli, Andrea; Ceriotti, Michele; Pickard, Chris J; Needs, Richard J

    2018-06-05

    Ice is one of the most extensively studied condensed matter systems. Yet several new phases have been discovered, both experimentally and theoretically, in recent years. Here we report a large-scale density-functional-theory study of the configuration space of water ice. We geometry-optimise 74,963 ice structures, which are selected and constructed from over five million tetrahedral networks listed in the databases of Treacy, Deem, and the International Zeolite Association. All prior knowledge of ice is set aside and we introduce "generalised convex hulls" to identify configurations stabilised by appropriate thermodynamic constraints. We thereby rediscover all known phases (I-XVII, i, 0 and the quartz phase) except the metastable ice IV. Crucially, we also find promising candidates for ices XVIII through LI. Using the "sketch-map" dimensionality-reduction algorithm we construct an a priori, navigable map of configuration space, which reproduces similarity relations between structures and highlights the novel candidates. By relating the known phases to the tractably small, yet structurally diverse set of synthesisable candidate structures, we provide an excellent starting point for identifying formation pathways.

  8. Ultra-high bandwidth quantum secured data transmission

    NASA Astrophysics Data System (ADS)

    Dynes, James F.; Tam, Winci W.-S.; Plews, Alan; Fröhlich, Bernd; Sharpe, Andrew W.; Lucamarini, Marco; Yuan, Zhiliang; Radig, Christian; Straw, Andrew; Edwards, Tim; Shields, Andrew J.

    2016-10-01

    Quantum key distribution (QKD) provides an attractive means for securing communications in optical fibre networks. However, deployment of the technology has been hampered by the frequent need for dedicated dark fibres to segregate the very weak quantum signals from conventional traffic. Up until now the coexistence of QKD with data has been limited to bandwidths that are orders of magnitude below those commonly employed in fibre optic communication networks. Using an optimised wavelength division multiplexing scheme, we transport QKD and the prevalent 100 Gb/s data format in the forward direction over the same fibre for the first time. We show a full quantum encryption system operating with a bandwidth of 200 Gb/s over a 100 km fibre. Exploring the ultimate limits of the technology by experimental measurements of the Raman noise, we demonstrate it is feasible to combine QKD with 10 Tb/s of data over a 50 km link. These results suggest it will be possible to integrate QKD and other quantum photonic technologies into high bandwidth data communication infrastructures, thereby allowing their widespread deployment.

  9. A new range-free localisation in wireless sensor networks using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Zengfeng; Zhang, Hao; Lu, Tingting; Sun, Yujuan; Liu, Xing

    2018-02-01

    Location information of sensor nodes is of vital importance for most applications in wireless sensor networks (WSNs). This paper proposes a new range-free localisation algorithm using support vector machine (SVM) and polar coordinate system (PCS), LSVM-PCS. In LSVM-PCS, two sets of classes are first constructed based on sensor nodes' polar coordinates. Using the boundaries of the defined classes, the operation region of WSN field is partitioned into a finite number of polar grids. Each sensor node can be localised into one of the polar grids by executing two localisation algorithms that are developed on the basis of SVM classification. The centre of the resident polar grid is then estimated as the location of the sensor node. In addition, a two-hop mass-spring optimisation (THMSO) is also proposed to further improve the localisation accuracy of LSVM-PCS. In THMSO, both neighbourhood information and non-neighbourhood information are used to refine the sensor node location. The results obtained verify that the proposed algorithm provides a significant improvement over existing localisation methods.
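
    The grid step of LSVM-PCS is straightforward to sketch: assign a node to a polar cell and report the cell centre as the estimate. In this sketch, direct geometry replaces the SVM classification stage, and the grid resolutions are arbitrary choices:

```python
import math

def polar_grid_centre(x, y, n_rings, n_sectors, r_max):
    """Place a point into a polar grid (n_rings x n_sectors) and return the
    grid cell's centre as the location estimate."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    ring = min(int(r / (r_max / n_rings)), n_rings - 1)
    sector = min(int(theta / (2 * math.pi / n_sectors)), n_sectors - 1)
    rc = (ring + 0.5) * (r_max / n_rings)        # radial cell centre
    tc = (sector + 0.5) * (2 * math.pi / n_sectors)  # angular cell centre
    return rc * math.cos(tc), rc * math.sin(tc)

coarse = polar_grid_centre(3.0, 4.0, n_rings=10, n_sectors=16, r_max=10.0)
fine = polar_grid_centre(3.0, 4.0, n_rings=100, n_sectors=64, r_max=10.0)
```

    The localisation error is bounded by the cell size, so refining the grid (more classes for the SVMs to separate) tightens the estimate; the paper's THMSO step then refines it further using neighbourhood information.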

  10. Is ICRP guidance on the use of reference levels consistent?

    PubMed

    Hedemann-Jensen, Per; McEwan, Andrew C

    2011-12-01

    In ICRP 103, which has replaced ICRP 60, it is stated that no fundamental changes have been introduced compared with ICRP 60. This is true, except that reference levels in emergency and existing exposure situations seem to be applied inconsistently, as they are in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf' item that can be set independently of the prevailing situation; it should be determined as part of the process of optimising the protection strategy. If not, protection would be sub-optimised. However, in ICRP 103 some inconsistent concepts have been introduced, e.g. in paragraph 279, which states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could be argued that if optimisation of protection below a fixed reference level is necessary, then the reference level has been set too high at the outset. Up until the last phase of the preparation of ICRP 103, the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems as if in ICRP 103 it was not fully recognised that dose constraints and reference levels are conceptually different. 
The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications seem to be inconsistent regarding the use of reference levels in existing and emergency exposure situations.

  11. Active Job Monitoring in Pilots

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-12-01

    Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. In particular, the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, as well as the need to identify misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisation. In addition, pilots add another layer of abstraction: they behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts on, and the importance of, identifying pilots and their substructures inside the batch system. It also shows how to determine monitoring data for specific jobs from identified pilots. Finally, the approach is evaluated.

  12. Learning strategies, study habits and social networking activity of undergraduate medical students.

    PubMed

    Bickerdike, Andrea; O'Deasmhunaigh, Conall; O'Flynn, Siun; O'Tuathaigh, Colm

    2016-07-17

    To determine the learning strategies, study habits, and online social networking use of undergraduates at an Irish medical school, and their relationship with academic performance. A cross-sectional study was conducted in Year 2 and final year undergraduate-entry and graduate-entry students at an Irish medical school. Data about participants' demographics and educational background, study habits (including time management), and use of online media were collected using a self-report questionnaire. Participants' learning strategies were measured using the 18-item Approaches to Learning and Studying Inventory (ALSI). Year score percentage was the measure of academic achievement. The association between demographic/educational factors, learning strategies, study habits, and academic achievement was statistically analysed using regression analysis. Forty-two percent of students were included in this analysis (n=376). A last-minute "cramming" time management study strategy was associated with increased use of online social networks. Learning strategies differed between undergraduate- and graduate-entrants, with the latter less likely to adopt a 'surface approach' and more likely to adopt a 'study monitoring' approach. Year score percentage was positively correlated with the 'effort management/organised studying' learning style. Poorer academic performance was associated with a poor time management approach to studying ("cramming") and increased use of the 'surface learning' strategy. Our study demonstrates that effort management and organised studying should be promoted, and surface learning discouraged, as part of any effort to optimise academic performance in medical school. Excessive use of social networking contributes to poor study habits, which are associated with reduced academic achievement.

  13. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Electricity generation based on hybrid energy systems (HESs) has become a more attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs rest solidly on an optimisation stage. This article discusses an optimal unit-sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis of the optimal HES are discussed in detail in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable (HOMER) for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy than the existing method.
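
    The unit-sizing idea, choosing component counts to minimise total cost while covering demand, can be sketched by exhaustive search for a small system. The component ratings and costs below are invented for illustration and bear no relation to the Indian case-study sites:

```python
from itertools import product

def size_hes(demand_kw, components, max_units=5):
    """Exhaustive unit-sizing sketch: choose unit counts for each component
    minimising total cost while the combined rated capacity covers demand."""
    best = None
    for counts in product(range(max_units + 1), repeat=len(components)):
        capacity = sum(n * c["kw"] for n, c in zip(counts, components))
        cost = sum(n * c["cost"] for n, c in zip(counts, components))
        if capacity >= demand_kw and (best is None or cost < best[1]):
            best = (counts, cost)
    return best

components = [
    {"name": "pv", "kw": 1.0, "cost": 900},
    {"name": "wind", "kw": 3.0, "cost": 2400},
    {"name": "diesel", "kw": 5.0, "cost": 5500},
]
counts, cost = size_hes(10.0, components)
```

    Real sizing models add hard constraints (e.g. reliability limits) and soft ones (e.g. renewable fraction) and replace brute force with heuristics, but the objective structure is the same.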

  14. Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity

    PubMed Central

    Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
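
    A two-gate Hodgkin-Huxley current of the kind the generic model composes can be integrated in a few lines. The Boltzmann parameters and time constants below are illustrative, not the optimised values from the paper:

```python
import math

def gate_inf(v, v_half, slope):
    """Steady state of a Hodgkin-Huxley gate (Boltzmann sigmoid)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

def step_current(v, m, h, dt, g_max, e_rev,
                 m_inf_pars, h_inf_pars, tau_m, tau_h):
    """One forward-Euler step of a generic two-gate ionic current
    i = g_max * m * h * (v - e_rev)."""
    m += dt * (gate_inf(v, *m_inf_pars) - m) / tau_m
    h += dt * (gate_inf(v, *h_inf_pars) - h) / tau_h
    return g_max * m * h * (v - e_rev), m, h

# Voltage step from -80 mV to 0 mV: activation (m) rises, inactivation (h)
# decays, producing a transient inward current.
m, h = gate_inf(-80, -30, 7), gate_inf(-80, -60, -7)
trace = []
for _ in range(400):
    i, m, h = step_current(0.0, m, h, 0.01, g_max=1.0, e_rev=40.0,
                           m_inf_pars=(-30, 7), h_inf_pars=(-60, -7),
                           tau_m=0.2, tau_h=2.0)
    trace.append(i)
```

    The optimisation described in the abstract fits exactly these gate parameters (half-activation voltages, slopes, time constants, conductances) so that the summed currents reproduce recorded action potential waveforms.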

  15. Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.

    PubMed

    Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir

    2014-01-01

    Hospital waiting times are considerably long, with no signs of reducing anytime soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes within healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography and magnetic resonance imaging often form queues; thus, by taking into account the waiting time of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
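
    Modelling each service as a queueing node makes the re-orchestration step concrete: route the next patient to the clinically equivalent service with the smallest expected wait. A minimal sketch assuming M/M/1 queues; the paper's actual queueing model is not specified in the abstract, and the rates below are placeholders.

```python
def mm1_mean_wait(arrival_rate, service_rate):
    """Mean time spent waiting in queue at an M/M/1 node (requires a stable queue)."""
    assert arrival_rate < service_rate, "queue must be stable (rho < 1)"
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)

def least_wait_service(services):
    """Re-orchestration: pick the service (name -> (arrival, service) rates)
    with the smallest expected wait among equivalent options."""
    return min(services, key=lambda name: mm1_mean_wait(*services[name]))
```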

  16. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To that end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.

  17. Robustness analysis of bogie suspension components Pareto optimised values

    NASA Astrophysics Data System (ADS)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results show that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COVs up to 0.1.
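
    Perturbing a design parameter around its Pareto-optimised nominal value with a lognormal distribution of prescribed COV can be sketched as below. The mean-preserving parameterisation is standard; the nominal value and COV used here are placeholders, not the paper's suspension values.

```python
import math
import random

def lognormal_around(nominal, cov, size, rng):
    """Draw samples around a nominal value, lognormally distributed with a given
    coefficient of variation; mu is chosen so the distribution mean equals `nominal`."""
    sigma2 = math.log(1.0 + cov ** 2)          # lognormal shape parameter from the COV
    mu = math.log(nominal) - 0.5 * sigma2      # mean-preserving location parameter
    return [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(size)]
```

    Each robustness run then evaluates wear, comfort and safety indices on such samples for every suspension parameter in turn.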

  18. Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)

    NASA Astrophysics Data System (ADS)

    Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan

    2010-05-01

    The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes the runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modelling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are taken as the reservoir base policy. ENSO standard indices are currently forecasted at a monthly time scale with nine-month lead time. 
These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
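
    Steps (i)-(v) form a receding-horizon loop: optimise the upcoming releases on a forecast scenario, implement only the first month's release, update the state, and roll forward. A deliberately simplified sketch, with the optimiser passed in as a black box and a one-line mass balance standing in for the full reservoir model:

```python
def receding_horizon_step(inflow_forecast, storage, optimise_releases):
    """One monthly iteration: optimise releases over the forecast horizon
    (steps ii-iii), implement the first release (step iv), update storage (step v)."""
    releases = optimise_releases(inflow_forecast, storage)
    first = releases[0]
    storage_next = storage + inflow_forecast[0] - first
    return first, storage_next
```

    In the paper's procedure the black-box optimiser is a MOGA evaluated on nine-month ENSO-conditioned forecasts, with the base policy applied beyond the horizon to capture long-term costs.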

  19. On the dynamic rounding-off in analogue and RF optimal circuit sizing

    NASA Astrophysics Data System (ADS)

    Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena

    2014-04-01

    Frequently used approaches to solving discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible solution frontier) or degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome the aforementioned situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique and show, via some analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.
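
    The difference between a posteriori rounding and dynamic rounding-off can be illustrated in a few lines. The two-candidate shortlist and the cost function used below are illustrative choices for the sketch, not the paper's algorithm:

```python
def snap(value, grid):
    """A posteriori rounding: each variable goes to its nearest discrete value."""
    return min(grid, key=lambda g: abs(g - value))

def dynamic_round(x, grids, cost):
    """Round one variable at a time, choosing between the two nearest discrete
    values by re-evaluating the cost, so earlier roundings inform later ones."""
    x = list(x)
    for i, grid in enumerate(grids):
        nearest_two = sorted(grid, key=lambda g: abs(g - x[i]))[:2]
        x[i] = min(nearest_two, key=lambda g: cost(x[:i] + [g] + x[i + 1:]))
    return x
```

    In the test below, naive snapping gives [2, 1] while the dynamic variant finds the cheaper discrete point [3, 0], illustrating how a posteriori rounding can be expelled from the optimum's neighbourhood.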

  20. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference vectors based on the boundary intersection approach, and the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
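
    The decomposition idea at the core of MOEA/D can be sketched for two objectives: generate evenly distributed preference vectors, then scalarise each sub-problem. The Tchebycheff scalarisation below is a common choice used here for illustration; the paper itself incorporates the preferences via a boundary intersection approach.

```python
def uniform_weights(n):
    """n evenly distributed two-objective preference vectors (reference directions)."""
    return [(i / (n - 1), 1.0 - i / (n - 1)) for i in range(n)]

def tchebycheff(f, w, z_star):
    """Scalarised sub-problem value: max_i w_i * |f_i - z*_i| for objective
    vector f, weight vector w and ideal point z*."""
    return max(wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z_star))
```

    Each weight vector defines one single-objective sub-problem, and all of them are evolved cooperatively in a single run.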

  1. Optimisation of the Management of Higher Activity Waste in the UK - 13537

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Ciara; Buckley, Matthew

    2013-07-01

    The Upstream Optioneering project was created in the Nuclear Decommissioning Authority (UK) to support the development and implementation of significant opportunities to optimise activities across all phases of the Higher Activity Waste management life cycle (i.e. retrieval, characterisation, conditioning, packaging, storage, transport and disposal). The objective of the Upstream Optioneering project is to work in conjunction with other functions within the NDA and the waste producers to identify and deliver solutions to optimise the management of higher activity waste. Historically, optimisation may have occurred on individual aspects of the waste life cycle (considered here to include retrieval, conditioning, treatment, packaging, interim storage and transport to the final end state, which may be geological disposal). By considering the waste life cycle as a whole, critical analysis of assumed constraints may lead to cost savings for the UK taxpayer. For example, it may be possible to challenge the requirements for packaging wastes for disposal to deliver an optimised waste life cycle. It is likely that the challenges faced in the UK are shared by other countries. The opportunities identified may therefore also apply elsewhere, with the potential for sharing information to enable value to be shared. (authors)

  2. Optimisation of active suspension control inputs for improved performance of active safety systems

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor

    2018-01-01

    A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to the performances of the standard ESC system, an optimal active brake system and a combined FAS and ESC configuration. Second, the optimisation approach is applied to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficients are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% under distinctly non-uniform friction conditions.

  3. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy of multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sum of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the result unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filter.
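
    Encoding each impulse response coefficient as a sum of signed powers of two is what makes the filter multiplier-less: every product becomes shifts and adds. A small sketch of the encoding and a simple adder-count proxy for hardware cost, both illustrative rather than the paper's exact cost function:

```python
def spt_value(terms):
    """Value of a coefficient encoded as signed powers of two,
    e.g. [(+1, -2), (-1, -5)] -> 2**-2 - 2**-5."""
    return sum(sign * 2.0 ** exp for sign, exp in terms)

def adder_cost(coefficients):
    """Hardware proxy: a coefficient with t non-zero SPT terms needs t - 1 adders."""
    return sum(max(len(terms) - 1, 0) for terms in coefficients)
```

    The GA then searches over such term lists, trading frequency response error against the adder count.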

  4. A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges

    PubMed Central

    Asgari, B.; Osman, S. A.; Adnan, A.

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in the cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces lower bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing a uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400

  6. Optimisation and validation of a rapid and efficient microemulsion liquid chromatographic (MELC) method for the determination of paracetamol (acetaminophen) content in a suppository formulation.

    PubMed

    McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin

    2007-05-09

    A rapid and efficient oil-in-water microemulsion liquid chromatographic method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated owing to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. The validated assay results and overall analysis time of the optimised method were compared with British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared with the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.

  7. Optimisation of the supercritical extraction of toxic elements in fish oil.

    PubMed

    Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M

    2014-01-01

    This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO₂ flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R²) (0.897-0.988) for the predicted response surface models confirmed a satisfactory adjustment of the polynomial regression models to the operating conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8 °C, a CO₂ flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.

  8. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus as well as the IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through the GSO-based bidding strategies is higher than with the other three methods.

  9. Analysis of changes in cancer health care system in Poland since the socio-economic transformation in 1989

    PubMed

    Dudek-Godeau, Dorota; Kieszkowska-Grudny, Anna; Kwiatkowska, Katarzyna; Bogusz, Joanna; Wysocki, Mirosław J; Bielska-Lasota, Magdalena

    The transformation period in Poland is associated with a set of factors seen as ‘socio-economic stress’, which unfavourably influenced cancer treatment and slowed down the progress of Polish cancer care in the 1990s. These outcomes may be felt in many aspects of cancer care to this day. The results of the international EUROCARE and CONCORD studies, based on European data, provide evidence that there is substantial potential for improvement of the low 5-year survival rates in Poland. Since high survival is related to a notably efficient health care system, improving organisation and treatment methods seems to be one of the most important directions of change in the Polish health care system. To this day, cancer care in Poland is based on a network outlined by Professor Koszarowski in the middle of the last century, which is a solid foundation for the contemporary project of the Comprehensive Cancer Care Network (CCCN) proposed in the frame of the CanCon Project. The aim was to analyse the structure of the health care system and the changes introduced within the oncology network in Poland since the beginning of the post-communist socio-economic transformation in 1989. This study was conducted based on the CanCon methods, aimed at reviewing the specialist literature and collecting the meaningful experiences of European countries in cancer care, including the main legal regulations. The analysis provided evidence that the political situation and the economic crisis of the transformation period disintegrated cancer care and resulted in low 5-year survival rates. A step towards increasing the efficiency of cancer treatment was the proposal of ‘Quick Oncological Therapy’, together with one more attempt to organise a CCCN. 
With this paper the authors contribute to the CanCon Project by exploring, analysing and discussing the cancer network in Poland as an example of existing net-like structures in Europe, as well as by preparing guidelines for constructing a contemporary CCCN. (1) ‘Socio-economic’ stress adversely affected the efficiency of oncological treatment, both by reducing safety and by slowing down the development of modern oncology. (2) Changing the current system into its contemporary form, the CCCN, could be an important step towards optimising oncological health care in Poland. (3) Introduction of mandatory monitoring of organisational changes, with the use of standardised health indicators, could allow for the assessment of the effectiveness of implemented solutions and their impact on a better prognosis for cancer patients. (4) Optimising the organisation of the health care system is possible only by implementing the necessary legislative corrections.

  10. Evolving aerodynamic airfoils for wind turbines through a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI

    2017-01-01

    Nowadays, genetic algorithms stand out for airfoil optimisation, owing to the virtues of mutation and crossover techniques. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are taken to be the maximisation of both aerodynamic efficiency and lift coefficient, while minimising the drag coefficient. The algorithm shows great improvements in computational cost, as well as high performance, obtaining airfoils optimised for Mexico City's specific wind conditions from generic wind turbines designed for higher Reynolds numbers in a few iterations.
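
    The arithmetic crossover rule mentioned above blends two real-coded parent vectors into two children; a minimal sketch, with the blending factor (normally drawn at random per crossover) passed in explicitly for clarity:

```python
def arithmetic_crossover(p1, p2, alpha):
    """Blend parents componentwise: c1 = alpha*p1 + (1-alpha)*p2, c2 is the mirror."""
    c1 = [alpha * a + (1.0 - alpha) * b for a, b in zip(p1, p2)]
    c2 = [(1.0 - alpha) * a + alpha * b for a, b in zip(p1, p2)]
    return c1, c2
```

    Because each child is a convex combination of the parents, offspring airfoil parameters always stay within the span of the parent designs.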

  11. Exemples d’utilisation des techniques d’optimisation en calcul de structures de reacteurs

    DTIC Science & Technology

    2003-03-01

    Geometric optimisation (fixed architecture). Unlike the automotive sector and aircraft manufacturers, most reactor components ... uses nonlinear material constitutive laws as well as large-displacement assumptions. The optimisation study consists in minimising ... a simple disc, and it was decided to select three parameters that influence rupture: the thickness of the disc web E1, the height L3 and the

  12. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp³)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  13. Least cost pathways to a low carbon electricity system for Australia: impacts of transmission augmentation and extension

    NASA Astrophysics Data System (ADS)

    Dargaville, R. J.

    2016-12-01

    Designing the pathway to a low carbon energy system is complex, requiring consideration of the variable nature of renewables at the hourly timescale, the emission intensity and ramp rate constraints of dispatchable technologies (both fossil and renewable), and transmission and distribution network limitations. In this work, an optimisation framework taking these considerations into account has been applied to find the lowest cost ways to reduce carbon emissions by either 80% or 100% in 2050 while keeping the system operating reliably along the way. Technologies included are existing and advanced coal and gas technologies (with and without carbon capture and storage), rooftop PV, utility scale PV, concentrating solar thermal, hydro with and without pumped storage, bioenergy, and nuclear. In this study we also allow the optimisation to increase transmission capacity along existing lines and to extend key trunk lines into currently unserved areas. These augmentations and extensions come at a cost; the optimisation chooses these options when the benefits of accessing high quality renewable energy resources outweigh the costs. Results show that for the 80% emission reduction case there is limited need for transmission capacity increases, and the existing grid copes well with the increased flows due to conversion to distributed renewable energy resources. However, in the 100% case the increased reliance on renewables means that significant transmission augmentation is beneficial to the overall cost. This strongly suggests that it is important to understand the long term emission target early so that infrastructure investments can be optimised.

  14. Identification of the contribution of contact and aerial biomechanical parameters in acrobatic performance

    PubMed Central

    Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël

    2017-01-01

    Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers were identified in the prediction of acrobatics success. The purpose of the present study was to evaluate the relative contribution of these parameters in performance throughout expertise or optimisation based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and complexity of its mastery. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment-multibody-model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics, and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state only contributed to performances in novice recorded trials. Moment of inertia contribution to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. Contribution to performance of momentum transfer to the trunk during the flight prevailed in all recorded trials. Although optimisation decreased transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique. 
Conversely, advanced gymnasts increased their performance mainly through improved aerial technique. For both groups, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalized to any aerial skill learning investigation. PMID:28422954

  15. Multi-objective optimisation and decision-making of space station logistics strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
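
    The DE step at the core of this kind of hybrid approach can be sketched as follows. This is a generic DE/rand/1/bin minimiser, not the paper's implementation: the population size, F, CR and the toy quadratic objective (standing in for the physical-programming-based single-objective function) are illustrative assumptions.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimise f over box bounds with classic DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors other than the current individual
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutation: v = a + F * (b - c), clipped to the box
            v = [min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                 for d in range(dim)]
            # binomial crossover with one guaranteed mutant gene
            jrand = rng.randrange(dim)
            trial = [v[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # greedy selection: keep the trial if it is no worse
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# toy stand-in objective with its minimum at (1, 2)
sphere = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
best = differential_evolution(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

    In the paper's workflow this single-objective solver is applied only after physical programming has collapsed the four objectives into one.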

  16. Cultural-based particle swarm for dynamic optimisation problems

    NASA Astrophysics Data System (ADS)

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

    Many practical optimisation problems involve uncertainties; a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes over time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment, to respond to those changes through diversity-based repulsion among particles and migration among swarms in the population space, and to select the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates performance better than or equal to most other selected state-of-the-art dynamic PSO heuristics.
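
    For readers unfamiliar with the underlying optimiser, a plain global-best PSO (without the cultural belief space or the dynamic-environment machinery the paper adds) can be sketched as follows. The inertia and acceleration coefficients and the static toy objective are illustrative assumptions, not values from the paper.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimise f over box bounds with a plain global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # personal bests
    gbest = min(pbest, key=f)[:]            # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull to pbest + social pull to gbest
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# static toy objective; the paper's DOP setting makes f change over time
quad = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
best = pso(quad, [(-5.0, 5.0), (-5.0, 5.0)])
```

    The cultural variant layers change detection, repulsion and multi-level leader selection on top of exactly this update rule.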

  17. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of the SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with the state-of-the-art algorithms.

  18. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905

  19. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.

    PubMed

    Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N

    2016-07-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.

  20. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions

    NASA Astrophysics Data System (ADS)

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-03-01

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.

  1. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions.

    PubMed

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-03-23

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell's equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.

  2. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiances. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.

  3. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    NASA Astrophysics Data System (ADS)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented, and detailed descriptions are provided of the grid model and the grid data used, which partly originate from open-source platforms. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined in consideration of construction costs. The developed simulations show that conventional grid expansion is more efficient and has a greater grid-relieving effect than the evaluated grid optimisation measures.

  4. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions

    PubMed Central

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-01-01

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than −15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally. PMID:28332585

  5. VLSI Technology for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is designing a spectrum sensing scheme efficient enough to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation targets the area and power performance of the spectrum sensing hardware. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption compared with the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.

  6. Achieving optimal SERS through enhanced experimental design.

    PubMed

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
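
    The multivariate alternative to one-factor-at-a-time searching that the review advocates starts from a designed experiment. A minimal sketch is a full-factorial design enumerating every combination of factor levels; the factor names and levels below are hypothetical SERS conditions, not values from the paper.

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels (full-factorial design).

    `factors` maps a factor name to its list of levels; the result is a
    list of run dictionaries covering the whole experimental landscape.
    """
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# hypothetical colloidal-SERS factors and levels
design = full_factorial({
    "aggregating_salt_mM": [5, 10, 20],
    "pH": [3, 7, 10],
    "colloid_volume_uL": [100, 200],
})
# 3 * 3 * 2 = 18 runs, varied jointly rather than one factor at a time
```

    Fractional-factorial or evolutionary designs shrink this run count when full enumeration is too expensive, which is the regime the review's evolutionary computational methods address.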

  7. Optimisation of decentralisation for effective Disaster Risk Reduction (DRR) through the case study of Indonesia

    NASA Astrophysics Data System (ADS)

    Grady, A.; Makarigakis, A.; Gersonius, B.

    2015-09-01

    This paper investigates how to optimise decentralisation for effective disaster risk reduction (DRR) in developing states. There is currently limited literature on empirical analysis of decentralisation for DRR. This paper evaluates decentralised governance for DRR in the case study of Indonesia and provides recommendations for its optimisation. Wider implications are drawn to optimise decentralisation for DRR in developing states more generally. A framework to evaluate the institutional and policy setting was developed which necessitated the use of a gap analysis, desk study and field investigation. Key challenges to decentralised DRR include capacity gaps at lower levels, low compliance with legislation, disconnected policies, issues in communication and coordination and inadequate resourcing. DRR authorities should lead coordination and advocacy on DRR. Sustainable multistakeholder platforms and civil society organisations should fill the capacity gap at lower levels. Dedicated and regulated resources for DRR should be compulsory.

  8. A management and optimisation model for water supply planning in water deficit areas

    NASA Astrophysics Data System (ADS)

    Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón

    2014-07-01

    The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
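
    The penalty idea in the objective function can be illustrated with a deliberately simplified allocation rule. This toy version assumes each user has a fixed demand and a constant unit penalty for unmet demand, in which case serving users in descending penalty order minimises total losses; the real model's multiple sources, distribution efficiencies and network connections are omitted, and the user names and numbers are invented.

```python
def allocate_water(total_supply, users):
    """Allocate a scarce supply to minimise total shortage penalties.

    `users` is a list of (name, demand, unit_penalty) tuples; unmet
    demand incurs demand_shortfall * unit_penalty in monetary losses.
    Greedy service in descending penalty order is optimal for this
    linear special case.
    """
    remaining = total_supply
    allocation, total_penalty = {}, 0.0
    for name, demand, unit_penalty in sorted(users, key=lambda u: -u[2]):
        given = min(demand, remaining)
        allocation[name] = given
        remaining -= given
        total_penalty += (demand - given) * unit_penalty
    return allocation, total_penalty

# invented demands (hm^3) and unit penalties (monetary units per hm^3)
users = [("urban", 50.0, 3.0), ("agriculture", 120.0, 1.0), ("industry", 40.0, 2.0)]
alloc, penalty = allocate_water(100.0, users)
# urban and industry fully served; agriculture absorbs the shortage
```

    The paper's global optimisation model generalises this to multiple conventional and non-conventional sources with physical supply-demand connections.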

  9. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation

    PubMed Central

    Zografos, K.; Oliveira, M. S. N.

    2016-01-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field. PMID:27478523

  10. Crystal structure optimisation using an auxiliary equation of state

    NASA Astrophysics Data System (ADS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
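
    The core of the approach, predicting the equilibrium volume from very few energy evaluations by exploiting a known E(V) form, can be sketched with the simplest possible equation of state: a quadratic fitted through three (V, E) samples. Real workflows would use a Birch-Murnaghan form and refine with successive single-point calculations; the energy function and volumes below are synthetic.

```python
def parabolic_v0(points):
    """Estimate the equilibrium volume from three (V, E) samples.

    Fits E(V) = a*V^2 + b*V + c through the three points via Newton
    divided differences and returns the volume at the minimum,
    V0 = -b / (2a).
    """
    (v1, e1), (v2, e2), (v3, e3) = points
    d1 = (e2 - e1) / (v2 - v1)
    d2 = (e3 - e2) / (v3 - v2)
    a = (d2 - d1) / (v3 - v1)          # quadratic coefficient
    b = d1 - a * (v1 + v2)             # linear coefficient
    return -b / (2.0 * a)

# synthetic harmonic E(V) with its minimum at V = 52.0
energy = lambda v: 0.01 * (v - 52.0) ** 2 - 3.5
v0 = parabolic_v0([(48.0, energy(48.0)),
                   (52.5, energy(52.5)),
                   (56.0, energy(56.0))])
```

    Because each (V, E) sample here stands for a full isochoric relaxation, reducing the number of samples needed is exactly what makes the method attractive for beyond-DFT functionals without analytical gradients.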

  11. Optimisation of confinement in a fusion reactor using a nonlinear turbulence model

    NASA Astrophysics Data System (ADS)

    Highcock, E. G.; Mandell, N. R.; Barnes, M.

    2018-04-01

    The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.

  12. Optimising low molecular weight hydrogels for automated 3D printing.

    PubMed

    Nolan, Michael C; Fuentes Caparrós, Ana M; Dietrich, Bart; Barrow, Michael; Cross, Emily R; Bleuel, Markus; King, Stephen M; Adams, Dave J

    2017-11-22

    Hydrogels prepared from low molecular weight gelators (LMWGs) are formed as a result of hierarchical intermolecular interactions between gelators to form fibres, and then further interactions between the self-assembled fibres via physical entanglements, as well as potential branching points. These interactions can allow hydrogels to recover quickly after a high shear rate has been applied. There are currently limited design rules describing which types of morphology or rheological properties are required for a LMWG hydrogel to be used as an effective, printable gel. By preparing hydrogels with different types of fibrous network structures, we have been able to understand in more detail the morphological type which gives rise to a 3D-printable hydrogel using a range of techniques, including rheology, small angle scattering and microscopy.

  13. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    NASA Astrophysics Data System (ADS)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  14. Pulsed source of spectrally uncorrelated and indistinguishable photons at telecom wavelengths.

    PubMed

    Bruno, N; Martin, A; Guerreiro, T; Sanguinetti, B; Thew, R T

    2014-07-14

    We report on the generation of indistinguishable photon pairs at telecom wavelengths based on a type-II parametric down-conversion process in a periodically poled potassium titanyl phosphate (PPKTP) crystal. The phase matching, pump laser characteristics and coupling geometry are optimised to obtain spectrally uncorrelated photons with high coupling efficiencies. Four photons are generated by a counter-propagating pump in the same crystal and analysed via two-photon interference experiments between photons from each pair source, as well as joint spectral and g(2) measurements. We obtain a spectral purity of 0.91 and coupling efficiencies around 90% for all four photons without any filtering. These pure indistinguishable photon sources at telecom wavelengths are perfectly adapted for quantum network demonstrations and other multi-photon protocols.

  15. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and dynamic maintenance policy. The obtained results are compared with existing methods and their effectiveness is validated. Issues that were previously only vaguely understood, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy in manufacturing system reliability analysis, are elaborated. This framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.

  16. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    PubMed

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our aim was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Artificial neural network modelling of pharmaceutical residue retention times in wastewater extracts using gradient liquid chromatography-high resolution mass spectrometry data.

    PubMed

    Munro, Kelly; Miller, Thomas H; Martins, Claudia P B; Edge, Anthony M; Cowan, David A; Barron, Leon P

    2015-05-29

    The modelling and prediction of reversed-phase chromatographic retention time (tR) under gradient elution conditions for 166 pharmaceuticals in wastewater extracts is presented using artificial neural networks for the first time. Radial basis function, multilayer perceptron and generalised regression neural networks were investigated and a comparison of their predictive ability for model solutions discussed. For real world application, the effect of matrix complexity on tR measurements is presented. Measured tR for some compounds in influent wastewater varied by >1 min in comparison to tR in model solutions. Similarly, matrix impact on artificial neural network predictive ability was addressed towards developing a more robust approach for routine screening applications. Overall, the best neural network had a predictive accuracy of <1.3 min at the 75th percentile of all measured tR data in wastewater samples (<10% of the total runtime). Coefficients of determination for 30 blind test compounds in wastewater matrices lay at or above R² = 0.92. Finally, the model was evaluated for application to the semi-targeted identification of pharmaceutical residues during a weeklong wastewater sampling campaign. The model successfully identified native compounds at a rate of 83±4% and 73±5% in influent and effluent extracts, respectively. The use of an HRMS database and the optimised ANN model was also applied to the shortlisting of 37 additional compounds in wastewater. Ultimately, this research will potentially enable faster identification of emerging contaminants in the environment through more efficient post-acquisition data mining. Copyright © 2015 Elsevier B.V. All rights reserved.
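
    Of the three network types compared, the generalised regression neural network is the simplest to sketch: it is a kernel-weighted average of the training targets, with a single smoothing parameter sigma. The descriptor-to-retention-time data below are synthetic stand-ins, not the wastewater dataset.

```python
import math

def grnn_predict(train_x, train_y, query, sigma=0.5):
    """Generalised regression neural network prediction.

    Each training point contributes its target value weighted by a
    Gaussian kernel of its distance to the query; the prediction is
    the normalised weighted average.
    """
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, query))
                        / (2.0 * sigma ** 2)) for x in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# synthetic: retention time grows with a single hydrophobicity descriptor
xs = [(0.0,), (1.0,), (2.0,), (3.0,), (4.0,)]
ys = [2.0, 4.1, 6.0, 7.9, 10.1]
tr = grnn_predict(xs, ys, (2.5,))
# prediction falls between the neighbouring training values
```

    Unlike the multilayer perceptron, a GRNN has no iterative training phase, which is one reason such architectures are attractive for screening pipelines with frequently updated reference data.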

  18. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. 
With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.

  19. Optimisation of rocker sole footwear for prevention of first plantar ulcer: comparison of group-optimised and individually-selected footwear designs.

    PubMed

    Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J

    2017-01-01

    Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke footwear which is personalised. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes in order to understand the transferability of previous footwear research, performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). An ANOVA was then used to understand the effects of the two design features, and descriptive statistics were used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as apex position was moved distally and rocker angle reduced (p < 0.001). The group-optimised design incorporated an apex at 52% of shoe length, a 20° rocker angle and an apex angle of 95°. With this design 71-81% of peak pressures were below the 200 kPa threshold, both in the full cohort of individuals with diabetes and also in the neuropathic subgroup. Importantly, only small increases (<5%) in this proportion were observed when participants wore footwear which was individually selected. 
In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient-by-patient basis in a low-risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure-reducing footwear.

  20. A new compound arithmetic crossover-based genetic algorithm for constrained optimisation in enterprise systems

    NASA Astrophysics Data System (ADS)

    Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu

    2017-01-01

In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and prematurity. Improving the performance of GA is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs quite well and that CAC10-GA outperforms AC10-GA.
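The compound arithmetic crossover itself is not specified in the abstract, but the baseline it is compared against, a real-coded arithmetic crossover paired with uniform mutation as in AC10-GA, can be sketched in a few lines. The function names, GA loop structure and toy sphere objective below are illustrative assumptions, not the paper's implementation:

```python
import random

def arithmetic_crossover(p1, p2, rng):
    """Real-coded arithmetic crossover: children are convex
    combinations of the two parents with a random weight."""
    lam = rng.random()
    c1 = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    c2 = [(1 - lam) * a + lam * b for a, b in zip(p1, p2)]
    return c1, c2

def uniform_mutation(ind, bounds, rate, rng):
    """Replace each gene with a fresh uniform draw with probability `rate`."""
    return [rng.uniform(lo, hi) if rng.random() < rate else g
            for g, (lo, hi) in zip(ind, bounds)]

def run_ga(fitness, bounds, pop_size=30, generations=60, rate=0.1, seed=1):
    """Minimise `fitness` with truncation selection plus the two operators."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # ascending: best first
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            c1, c2 = arithmetic_crossover(rng.choice(elite), rng.choice(elite), rng)
            children.append(uniform_mutation(c1, bounds, rate, rng))
            if len(elite) + len(children) < pop_size:
                children.append(uniform_mutation(c2, bounds, rate, rng))
        pop = elite + children
    return min(pop, key=fitness)

# Toy objective: sphere function, minimum at the origin.
best = run_ga(lambda x: sum(g * g for g in x), [(-5.0, 5.0)] * 2)
```

Because arithmetic crossover only produces points inside the convex hull of the parents, the population contracts around the elite region, which is one reason premature convergence is a concern for this operator family.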

  1. A rapid, automated approach to optimisation of multiple reaction monitoring conditions for quantitative bioanalytical mass spectrometry.

    PubMed

    Higton, D M

    2001-01-01

    An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction methods (MRM) for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential, the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters for MRM methods were found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.

  2. Optimised analytical models of the dielectric properties of biological tissue.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O'Halloran, Martin

    2017-05-01

    The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
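A multi-pole Debye model of the kind being fitted here evaluates the complex relative permittivity as ε*(ω) = ε∞ + Σk Δεk/(1 + jωτk) + σs/(jωε0). A minimal sketch of that evaluation follows; the three-pole parameter values are illustrative placeholders, not the paper's fitted values for any tissue:

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_inf, poles, sigma_s):
    """Complex relative permittivity of a multi-pole Debye model.

    poles: list of (delta_eps, tau_seconds) pairs, one per Debye pole.
    sigma_s: static conductivity in S/m (dominates the loss at low frequency).
    """
    omega = 2 * math.pi * freq_hz
    eps = complex(eps_inf, 0.0)
    for delta_eps, tau in poles:
        eps += delta_eps / (1 + 1j * omega * tau)
    eps += sigma_s / (1j * omega * EPS0)
    return eps

# Hypothetical three-pole parameters (illustrative only, not fitted values).
poles = [(50.0, 7.2e-12), (1200.0, 3.5e-10), (2.5e4, 2.0e-8)]
eps = debye_permittivity(1e9, 4.0, poles, 0.7)
```

An optimiser such as the two-stage genetic algorithm described above would adjust `eps_inf`, the `(delta_eps, tau)` pairs and `sigma_s` to minimise the misfit between this function and measured permittivity data across the 500 MHz to 20 GHz band.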

  3. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    PubMed

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  4. Optimisation of phenolic extraction from Averrhoa carambola pomace by response surface methodology and its microencapsulation by spray and freeze drying.

    PubMed

    Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata

    2015-03-15

Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), each at 5 levels (-1.414, -1, 0, +1 and +1.414), were used to design the optimisation model using a central composite rotatable design, where -1.414 and +1.414 refer to axial points, -1 and +1 to factorial points, and 0 to the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse phase-high pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulating efficiency was obtained with freeze-dried encapsulates (78-97%). The optimised model could be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
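The coded central composite rotatable design described above (factorial points at ±1, axial points at ±1.414, replicated centre points) can be generated mechanically. The centre and half-range values used below to map coded levels back to temperature and ethanol concentration are assumptions for illustration, not the study's coding:

```python
from itertools import product

def ccrd_points(alpha=1.414, n_center=5):
    """Coded design points of a two-factor central composite rotatable
    design: 4 factorial (+/-1), 4 axial (+/-alpha), plus centre replicates."""
    factorial = list(product((-1.0, 1.0), repeat=2))
    axial = [(alpha, 0.0), (-alpha, 0.0), (0.0, alpha), (0.0, -alpha)]
    centre = [(0.0, 0.0)] * n_center
    return factorial + axial + centre

def decode(point, centres, half_ranges):
    """Map coded levels back to natural units (e.g. °C and % ethanol)."""
    return tuple(c + x * h for x, c, h in zip(point, centres, half_ranges))

design = ccrd_points()
# Assumed mapping: 40 °C centre +/- 10 °C, 65 % ethanol centre +/- 15 %.
runs = [decode(p, (40.0, 65.0), (10.0, 15.0)) for p in design]
```

The axial distance α = 1.414 ≈ √2 is what makes a two-factor central composite design rotatable: all non-centre points sit at equal distance from the centre in coded units.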

  5. Analysis of power gating in different hierarchical levels of 2MB cache, considering variation

    NASA Astrophysics Data System (ADS)

    Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza

    2015-09-01

This article investigates the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including cell, row, bank and entire cache memory, in a 16 nm fin field-effect transistor (FinFET) technology. Different SRAM cell structures, such as 6T, 8T, 9T and 10T, are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and stability overheads. The power saving of cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level due to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is the best option for optimising the power of the entire cache with the fewest drawbacks. Comparisons of cells show that the cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique has the lowest percentage of saving at the minimum energy point (MEP) of the design. Power gating also improves the variation of power in all structures by at least 70%.

  6. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. 
This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
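The poling idea, penalising closeness to previously generated flux distributions so that each new optimisation lands elsewhere in the (near-)optimal space, can be illustrated without a full LP solver. The Gaussian penalty form and the toy candidate set below are assumptions for the sketch; the paper embeds the penalty inside the flux balance analysis objective itself:

```python
import math, random

def poled_score(v, c, previous, weight=5.0, width=1.0):
    """Linear FBA-style objective c.v minus a poling penalty that decays
    with squared distance from each previously generated flux vector."""
    linear = sum(ci * vi for ci, vi in zip(c, v))
    penalty = sum(math.exp(-sum((a - b) ** 2 for a, b in zip(v, p)) / width)
                  for p in previous)
    return linear - weight * penalty

def sample_flux_space(candidates, c, n_samples=3):
    """Repeatedly pick the best-scoring candidate, poling away from
    solutions already placed in the characteristic set."""
    chosen = []
    for _ in range(n_samples):
        best = max(candidates, key=lambda v: poled_score(v, c, chosen))
        chosen.append(best)
    return chosen

# Toy two-flux "network": many candidate flux vectors, one linear objective.
rng = random.Random(0)
candidates = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
solutions = sample_flux_space(candidates, c=(1.0, 1.0))
```

Each accepted solution raises the penalty surface around itself, so subsequent picks are pushed towards different, still high-objective flux distributions, which is the mechanism behind the "characteristic set" coverage described above.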

  7. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking; this model involves following structured steps. The second is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking; this model is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems; in this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  8. Greenhouse gas network design using backward Lagrangian particle dispersion modelling - Part 1: Methodology and Australian test case

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Nickless, A.; Rayner, P. J.; Law, R. M.; Roff, G.; Fraser, P.

    2014-09-01

    This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes, comprising contributions from the biosphere and fossil fuel combustion, and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly timescale. Prior uncertainties are derived on a weekly timescale for biosphere fluxes and fossil fuel emissions from high-resolution model runs using the Community Atmosphere Biosphere Land Exchange (CABLE) model and the Fossil Fuel Data Assimilation System (FFDAS) respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground-based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimisation scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50%, we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.
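An incremental network-design scheme of this kind greedily adds, at each step, the candidate station that most reduces total posterior uncertainty. The sketch below replaces the Bayesian inverse calculation with a toy coverage model in which each station lowers the prior variance of the flux regions it constrains; the station names, regions and variance figures are all hypothetical:

```python
def total_uncertainty(stations, coverage, regions, reduced=0.3):
    """Prior variance 1.0 per region, lowered to `reduced` wherever at
    least one station in the network constrains that region."""
    covered = set()
    for s in stations:
        covered |= coverage[s]
    return sum(reduced if r in covered else 1.0 for r in regions)

def incremental_design(existing, candidates, coverage, regions, target=0.5):
    """Greedily append the candidate giving the largest drop in total
    uncertainty until the fractional reduction reaches `target`."""
    network = list(existing)
    prior = float(len(regions))
    while 1.0 - total_uncertainty(network, coverage, regions) / prior < target:
        remaining = [s for s in candidates if s not in network]
        if not remaining:
            break  # target unreachable with this candidate list
        network.append(min(remaining,
                           key=lambda s: total_uncertainty(network + [s],
                                                           coverage, regions)))
    return network

# Hypothetical stations and the flux regions each one helps constrain.
coverage = {"CapeGrim": {"SE"}, "Gunn": {"N"},
            "A": {"NE", "E"}, "B": {"W"}, "C": {"N", "NE"}}
regions = ["N", "NE", "E", "SE", "W"]
network = incremental_design(["CapeGrim", "Gunn"], ["A", "B", "C"],
                             coverage, regions)
```

In the paper, the equivalent of `total_uncertainty` is the trace of the posterior flux covariance from the Bayesian inversion, but the greedy add-one-station-at-a-time loop has the same shape.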

  9. Multiple response optimisation of processing and formulation parameters of pH sensitive sustained release pellets of capecitabine for targeting colon.

    PubMed

    Pandey, Sonia; Swamy, S M Vijayendra; Gupta, Arti; Koli, Akshay; Patel, Swagat; Maulvi, Furqan; Vyas, Bhavin

    2018-04-29

To optimise Eudragit/Surelease®-coated pH-sensitive pellets for controlled and targeted drug delivery to colon tissue, and to avoid the frequent high dosing and associated side effects which restrict its use in colorectal-cancer therapy. The pellets were prepared using an extrusion-spheronisation technique. Box-Behnken and 3² full factorial designs were applied to optimise the process parameters [extruder sieve size, spheroniser speed, and spheroniser time] and the coating levels [%w/v of Eudragit S100/Eudragit L100 and Surelease®], respectively, to achieve smooth, optimised-size pellets with sustained drug delivery without prior drug release in the upper gastrointestinal tract (GIT). The design proposed the optimised batch by selecting the independent variables at extruder sieve size (X1 = 1 mm), spheroniser speed (X2 = 900 revolutions per minute, rpm), and spheroniser time (X3 = 15 min) to achieve a pellet size of 0.96 mm, an aspect ratio of 0.98, and roundness of 97.42%. The 16%w/v coating strength of Surelease® and 13%w/v coating strength of Eudragit showed pH-dependent sustained release up to 22.35 h (t99%). The organ distribution study showed the absence of the drug in the upper part of the GIT and the presence of a high level of capecitabine in the caecum and colon tissue. Thus, the outer Eudragit coat prevented the release of drug in the stomach and the inner Surelease® coat showed sustained drug release in the colon tissue. The study demonstrates the potential of optimised Eudragit/Surelease®-coated capecitabine pellets as an effective colon-targeted delivery system to avoid frequent high dosing and associated systemic side effects of the drug.

  10. Applying a Health Network approach to translate evidence-informed policy into practice: a review and case study on musculoskeletal health.

    PubMed

    Briggs, Andrew M; Bragge, Peter; Slater, Helen; Chan, Madelynn; Towler, Simon C B

    2012-11-14

    While translation of evidence into health policy and practice is recognised as critical to optimising health system performance and health-related outcomes for consumers, mechanisms to effectively achieve these goals are neither well understood, nor widely communicated. Health Networks represent a framework which offers a possible solution to this dilemma, particularly in light of emerging evidence regarding the importance of establishing relationships between stakeholders and identifying clinical leaders to drive evidence integration and translation into policy. This is particularly important for service delivery related to chronic diseases. In Western Australia (WA), disease and population-specific Health Networks are comprised of cross-discipline stakeholders who work collaboratively to develop evidence-informed policies and drive their implementation. Since establishment of the Health Networks in WA, over 50 evidence-informed Models of Care (MoCs) have been produced across 18 condition or population-focused Networks. The aim of this paper is to provide an overview of the Health Network framework in facilitating the translation of evidence into policy and practice with a particular focus on musculoskeletal health. A review of activities of the WA Musculoskeletal Health Network was undertaken, focussing on outcomes and the processes used to achieve them in the context of: development of policy, procurement of funding, stakeholder engagement, publications, and projects undertaken by the Network which aligned to implementation of MoCs.The Musculoskeletal Health Network has developed four MoCs which reflect Australian National Health Priority Areas. Establishment of community-based services for consumers with musculoskeletal health conditions is a key recommendation from these MoCs. 
Through mapping barriers and enablers to policy implementation, working groups, led by local clinical leaders and supported by the broader Network and government officers, have undertaken a range of integrated projects, such as the establishment of a community-based, multidisciplinary rheumatology service. The success of these projects has been contingent on developing relationships between key stakeholders across the health system. In WA, Networks have provided a sustainable mechanism to meaningfully engage consumers, carers, clinicians and other stakeholders; provided a forum to exchange ideas, information and evidence; and collaboratively plan and deliver evidence-based and contextually-appropriate health system improvements for consumers.

  12. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the inputs and the warpage value as the output. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters was performed in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) was applied together with Analysis of Variance (ANOVA) to investigate the interactions between the parameters that are significant to the warpage value. An optimised warpage value can thus be obtained from the RSM model owing to its minimal error, and the study shows that the warpage value is improved by using RSM.

  13. Optimisation of warpage on thin shell part by using particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Norshahira, R.; Shayfull, Z.; Nasir, S. M.; Saad, S. M. Sazli; Fathullah, M.

    2017-09-01

As products nowadays move towards thinner designs, the production of plastic parts faces many difficulties, because the possibility of defects increases as the wall thickness decreases. Demand for techniques to reduce these defects is therefore growing. The defects arise from several factors in the injection moulding process. In this study, Moldflow software was used to simulate the injection moulding process, while RSM was used to produce the mathematical model serving as the input fitness function for Matlab. The particle swarm optimisation (PSO) technique was then used to optimise the processing conditions to reduce the shrinkage and warpage of the plastic part. The results show warpage reductions of 17.60% in the x direction, 18.15% in the y direction and 10.25% in the z direction. These results demonstrate the reliability of this artificial intelligence method in minimising product warpage.
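A minimal particle swarm optimiser of the kind applied here updates each particle's velocity from three terms: inertia, a pull toward the particle's personal best, and a pull toward the swarm's global best. The quadratic stand-in for the RSM warpage model below, with its minimum location and coefficients, is an illustrative assumption:

```python
import random

def pso(objective, bounds, n_particles=20, iters=80,
        w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimise `objective` with a basic inertia-weight PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp each coordinate back into its processing window.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for an RSM warpage model: smooth quadratic, minimum at (60, 230).
warpage = lambda x: 0.1 + 0.002 * (x[0] - 60) ** 2 + 0.001 * (x[1] - 230) ** 2
best, best_val = pso(warpage, [(40, 80), (200, 260)])
```

In the workflow described above, `warpage` would be replaced by the RSM-fitted polynomial exported from Design Expert, and the bounds by the feasible processing windows of the moulding machine.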

  14. Letter to the editor concerning the article "Effects of acoustic feedback training in elite-standard Para-Rowing" by Schaffert and Mattes (2015).

    PubMed

    Hill, Holger

    2015-01-01

In a case study, Schaffert and Mattes reported the application of acoustic feedback (sonification) to optimise the time course of boat acceleration. The authors attributed an increased boat speed in the feedback condition to an optimised boat acceleration (mainly during the recovery phase). However, in rowing it is biomechanically impossible to increase the boat speed significantly by reducing the fluctuations in boat acceleration during the rowing cycle. To assess such a potentially small optimising effect experimentally, the confounding variables must be controlled very accurately (in particular, the propulsive forces must be kept constant between experimental conditions, or the differences in propulsive forces between conditions must be much smaller than the effects on boat speed resulting from an optimised movement pattern). However, this was not controlled adequately by the authors. Instead, the presented boat acceleration data show that the increased boat speed under acoustic feedback was due to increased propulsive forces.

  15. Optimisation of composite bone plates for ulnar transverse fractures.

    PubMed

    Chakladar, N D; Harper, L T; Parsons, A J

    2016-04-01

    Metallic bone plates are commonly used for arm bone fractures where conservative treatment (casts) cannot provide adequate support and compression at the fracture site. These plates, made of stainless steel or titanium alloys, tend to shield stress transfer at the fracture site and delay the bone healing rate. This study investigates the feasibility of adopting advanced composite materials to overcome stress shielding effects by optimising the geometry and mechanical properties of the plate to match more closely to the bone. An ulnar transverse fracture is characterised and finite element techniques are employed to investigate the feasibility of a composite-plated fractured bone construct over a stainless steel equivalent. Numerical models of intact and fractured bones are analysed and the mechanical behaviour is found to agree with experimental data. The mechanical properties are tailored to produce an optimised composite plate, offering a 25% reduction in length and a 70% reduction in mass. The optimised design may help to reduce stress shielding and increase bone healing rates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Thermal energy and economic analysis of a PCM-enhanced household envelope considering different climate zones in Morocco

    NASA Astrophysics Data System (ADS)

    Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine

    2018-07-01

This study investigates the thermal energy potential and economic feasibility of phase change materials (PCMs) integrated into the envelope of an air-conditioned family household, considering different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and cost-effectiveness among predefined candidate solutions. The optimisation methodology is based on coupling EnergyPlus® as a dynamic simulation tool and GenOpt® as an optimisation tool. Using the obtained optimum design strategies, a thermal energy and economic analysis was carried out to investigate the feasibility of integrating PCMs in Moroccan constructions. The results show that the PCM-integrated household envelope reduces the cooling/heating thermal energy demand relative to a reference household without PCM. For the cost-effectiveness optimisation, however, it was deduced that economic feasibility is still insufficient under current PCM market conditions. The optimal design parameter results are also analysed.

  17. Paediatric CT protocol optimisation: a design of experiments to support the modelling and optimisation process.

    PubMed

    Rani, K; Jahnen, A; Noel, A; Wolf, D

    2015-07-01

    In the last decade, several studies have emphasised the need to understand and optimise the computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on the radiation dose and the image quality, a statistical model has been developed using the design of experiments (DOE) method that has been successfully used in various fields (industry, biology and finance) applied to CT procedures for the abdomen of paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDI vol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
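A coded Box-Behnken design places ±1 factorial points on every pair of factors while holding the remaining factors at the centre level, plus replicated centre runs. For three factors (here, tube current, tube voltage and iterative-reconstruction level) that gives 12 edge-midpoint runs plus the centres:

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Coded Box-Behnken design for k factors: +/-1 points on each pair
    of factors with the other factors at 0, plus centre replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            point = [0] * k
            point[i], point[j] = a, b
            runs.append(tuple(point))
    runs += [(0,) * k] * n_center
    return runs

# Three factors: tube current, tube voltage, iterative-reconstruction level.
design = box_behnken(3)
```

Each response (contrast-to-noise ratio, noise, CTDIvol) is then fitted as a quadratic model over these 15 runs, which is far cheaper than the 27 runs a full 3³ factorial would require while still supporting the optimisation models described above.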

  18. Multi-objective thermodynamic optimisation of supercritical CO2 Brayton cycles integrated with solar central receivers

    NASA Astrophysics Data System (ADS)

    Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes

    2018-01-01

In this paper, optimisation of supercritical CO2 (S-CO2) Brayton cycles integrated with a solar receiver, which provides the heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisations were performed using the Non-dominated Sorting Genetic Algorithm. The effects of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency were analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached its maximum value between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.
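The core of a Non-dominated Sorting Genetic Algorithm is the non-dominated (Pareto) sort of candidate designs under the competing objectives. A minimal sketch of that first-front extraction follows; the (exergy efficiency, thermal efficiency) pairs are hypothetical candidate operating conditions, not the paper's results:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (both objectives maximised here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated set: the first front an NSGA-style sort would rank."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (exergy efficiency, thermal efficiency) pairs for candidate
# S-CO2 cycle operating conditions.
designs = [(0.62, 0.41), (0.65, 0.38), (0.60, 0.44),
           (0.58, 0.40), (0.61, 0.40)]
front = pareto_front(designs)
```

The full algorithm repeats this ranking over successive fronts and uses crowding distance to keep the front well spread, but the dominance test above is the piece that lets two objectives be optimised jointly without collapsing them into a single weighted score.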

  19. Losses of nutrients and anti-nutrients in red and white sorghum cultivars after decorticating in optimised conditions.

    PubMed

    Galán, María Gimena; Llopart, Emilce Elina; Drago, Silvina Rosa

    2018-05-01

    The aims were to optimise the pearling process of red and white sorghum by assessing the effects of pearling time and grain moisture on endosperm yield and flour ash content, and to assess nutrient and anti-nutrient losses produced by pearling different cultivars under optimised conditions. Both variables significantly affected both responses. Losses of ashes (58%), proteins (9.5%), lipids (54.5%), Na (37%), Mg (48.5%) and phenolic compounds (43%) were similar between red and white hybrids. However, losses of P (30% vs. 51%), phytic acid (47% vs. 66%), Fe (22% vs. 55%), Zn (32% vs. 62%), Ca (60% vs. 66%), K (46% vs. 61%) and Cu (51% vs. 71%) were lower for red than for white sorghum, due to different degrees of extraction and distribution of components in the grain. Optimised pearling conditions were extrapolated to other hybrids, indicating these criteria could be applied at an industrial level to obtain refined flours with proper quality and good endosperm yields.

  20. Natural Erosion of Sandstone as Shape Optimisation.

    PubMed

    Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan

    2017-12-11

    Natural arches, pillars and other exotic sandstone formations have always attracted attention for their unusual shapes and amazing mechanical balance, which leave a strong impression of intelligent design rather than the result of a stochastic process. It has recently been demonstrated that these shapes could be the result of a negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches utilised in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and the Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of the erosion using a topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and of the feasible landscape formations that could be found on Earth and beyond.
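
    The negative stress-erosion feedback can be caricatured as a hard-kill loop that repeatedly removes the least-loaded material, in the spirit of evolutionary structural optimisation. A toy sketch (the per-element strain energies are invented, and a real loop would re-solve the elasticity problem after every removal):

```python
def erode(strain_energy, removal_fraction=0.1, steps=3):
    """Toy hard-kill erosion: at each step, delete the least-stressed
    elements, mimicking erosion driven by a negative stress-erosion
    feedback. In a real shape-optimisation loop the strain energies
    would be recomputed by a FEM solve after every removal."""
    elements = dict(strain_energy)
    for _ in range(steps):
        n_remove = max(1, int(len(elements) * removal_fraction))
        # Sort element ids by strain energy, lowest first, and remove them.
        victims = sorted(elements, key=elements.get)[:n_remove]
        for e in victims:
            del elements[e]
    return elements

# Hypothetical per-element strain energies on a small grid.
energies = {i: (i % 7) + 0.1 * i for i in range(20)}
survivors = erode(energies, removal_fraction=0.2, steps=2)  # 20 -> 16 -> 13
```

What survives is, loosely, the load-bearing "skeleton" that the abstract argues natural arches converge to.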

  1. Optimisation of synergistic biomass-degrading enzyme systems for efficient rice straw hydrolysis using an experimental mixture design.

    PubMed

    Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat

    2012-09-01

    A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was then used to optimise a ternary enzyme complex combining this synergistic mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6, which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic enzyme mixture of fungal enzymes and expansin for lignocellulosic degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
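
    A mixture design fits a Scheffé polynomial over component proportions that sum to one, and the optimal blend is then read off the fitted surface. A minimal sketch with a quadratic Scheffé model (the coefficients are invented for illustration, not the paper's full cubic fit):

```python
def scheffe_quadratic(x, b):
    """Scheffé quadratic mixture model for three components; blending
    terms like b12*x1*x2 capture synergy between pairs of components."""
    x1, x2, x3 = x
    b1, b2, b3, b12, b13, b23 = b
    return (b1 * x1 + b2 * x2 + b3 * x3 +
            b12 * x1 * x2 + b13 * x1 * x3 + b23 * x2 * x3)

def best_blend(b, step=0.01):
    """Exhaustive search over the simplex x1 + x2 + x3 = 1."""
    best = None
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x = (i * step, j * step, 1 - (i + j) * step)
            y = scheffe_quadratic(x, b)
            if best is None or y > best[0]:
                best = (y, x)
    return best

# Hypothetical coefficients with a synergistic (positive) blending term
# between the first two components.
b = (500.0, 450.0, 200.0, 600.0, 0.0, 0.0)
yield_, blend = best_blend(b)
```

With a fitted model, the same search (or a constrained optimiser) yields the predicted optimum ratio reported in the abstract.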

  2. Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veselov, D A; Ayusheva, K R; Shashkin, I S

    2015-10-31

    We have studied the effect of laser cavity parameters on the light–current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400-1600 nm. It has been shown that optimisation of the cavity parameters (chip length and front facet reflectivity) allows one to improve heat removal from the laser without changing other laser characteristics. An increase of 0.5 W in the maximum output optical power of the laser has been demonstrated due to cavity design optimisation.

  3. Vertical transportation systems embedded on shuffled frog leaping algorithm for manufacturing optimisation problems in industries.

    PubMed

    Aungkulanon, Pasura; Luangpaiboon, Pongchanun

    2016-01-01

    Response surface methods based on first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching a normal operating velocity, and motions both reaching and not reaching the transitional phase. These variants were performed to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance measures of the proposed methods when compared to other optimisation algorithms for an actual deep cut design.
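
    The shuffled frog leaping algorithm sorts the population, partitions it into memeplexes, leaps each memeplex's worst frog toward the local and global bests, and reshuffles. A minimal sketch on a test function (all parameter values are illustrative, not those of the study):

```python
import random

def sfla(f, dim=2, frogs=20, memeplexes=4, iters=50, seed=1):
    """Minimal shuffled frog leaping algorithm (SFLA) sketch for
    continuous minimisation; the machining objectives of the paper
    would replace the test function f."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(frogs)]
    for _ in range(iters):
        pop.sort(key=f)                      # best frogs first
        global_best = pop[0]
        for m in range(memeplexes):
            plex = pop[m::memeplexes]        # shuffle frogs into memeplexes
            local_best, worst = plex[0], plex[-1]
            # Leap the worst frog toward the local, then the global, best.
            for guide in (local_best, global_best):
                step = [rng.random() * (g - w) for g, w in zip(guide, worst)]
                trial = [w + s for w, s in zip(worst, step)]
                if f(trial) < f(worst):
                    worst[:] = trial         # accept the improving leap
                    break
            else:                            # no improvement: random restart
                worst[:] = [rng.uniform(-5, 5) for _ in range(dim)]
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = sfla(sphere)  # converges toward the origin on the sphere function
```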

  4. Cost-aware request routing in multi-geography cloud data centres using software-defined networking

    NASA Astrophysics Data System (ADS)

    Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei

    2017-03-01

    Current geographically distributed cloud data centres (CDCs) incur enormous energy and bandwidth costs to provide multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver massive amounts of data between users and distributed CDCs through internet service providers (ISPs). The geographical diversity of bandwidth and energy costs poses a highly challenging problem: how to minimise the total cost of a CDC provider. Using the recently emerging software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting the geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear programming (MINLP) problem. Then, we develop heuristic algorithms to solve the problem and to provide cost-aware request routing for the joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. Besides, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
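
    The heuristic idea of routing requests by the combined energy and bandwidth price of each CDC/ISP path can be sketched as a greedy, capacity-respecting assignment (all names, costs and capacities below are invented, not the paper's formulation):

```python
def route_requests(demand, unit_cost, capacity):
    """Greedy sketch of cost-aware request routing: send each region's
    requests to the feasible (CDC, ISP) path with the lowest combined
    energy-plus-bandwidth unit cost. The paper formulates this as a
    MINLP and solves it with heuristics; this is only a toy analogue."""
    remaining = dict(capacity)
    plan, total = {}, 0.0
    for region, load in sorted(demand.items()):
        # Try paths for this region from cheapest to most expensive.
        for path in sorted(unit_cost[region], key=unit_cost[region].get):
            if remaining[path] >= load:
                remaining[path] -= load
                plan[region] = path
                total += load * unit_cost[region][path]
                break
        else:
            raise ValueError(f"no capacity left for {region}")
    return plan, total

demand = {"eu": 40, "us": 70}
unit_cost = {"eu": {("cdc1", "ispA"): 2.0, ("cdc2", "ispB"): 3.0},
             "us": {("cdc1", "ispA"): 2.5, ("cdc2", "ispB"): 1.5}}
capacity = {("cdc1", "ispA"): 100, ("cdc2", "ispB"): 100}
plan, total = route_requests(demand, unit_cost, capacity)
# eu -> cdc1/ispA (40*2.0) + us -> cdc2/ispB (70*1.5) = 185.0
```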

  5. Intensive care unit patients with lower respiratory tract nosocomial infections: the ENIRRIs project.

    PubMed

    De Pascale, Gennaro; Ranzani, Otavio T; Nseir, Saad; Chastre, Jean; Welte, Tobias; Antonelli, Massimo; Navalesi, Paolo; Garofalo, Eugenio; Bruni, Andrea; Coelho, Luis Miguel; Skoczynski, Szymon; Longhini, Federico; Taccone, Fabio Silvio; Grimaldi, David; Salzer, Helmut J F; Lange, Christoph; Froes, Filipe; Artigas, Antoni; Díaz, Emili; Vallés, Jordi; Rodríguez, Alejandro; Panigada, Mauro; Comellini, Vittoria; Fasano, Luca; Soave, Paolo M; Spinazzola, Giorgia; Luyt, Charles-Edouard; Alvarez-Lerma, Francisco; Marin, Judith; Masclans, Joan Ramon; Chiumello, Davide; Pezzi, Angelo; Schultz, Marcus; Mohamed, Hafiz; Van Der Eerden, Menno; Hoek, Roger A S; Gommers, D A M P J; Pasquale, Marta Di; Civljak, Rok; Kutleša, Marko; Bassetti, Matteo; Dimopoulos, George; Nava, Stefano; Rios, Fernando; Zampieri, Fernando G; Povoa, Pedro; Bos, Lieuwe D; Aliberti, Stefano; Torres, Antoni; Martín-Loeches, Ignacio

    2017-10-01

    The clinical course of intensive care unit (ICU) patients may be complicated by a large spectrum of lower respiratory tract infections (LRTI), defined by specific epidemiological, clinical and microbiological aspects. A European network for ICU-related respiratory infections (ENIRRIs), supported by the European Respiratory Society, has recently been established, with the aim of studying all respiratory tract infective episodes except community-acquired ones. A multicentre, observational study is in progress, enrolling more than 1000 patients presenting clinical, biochemical and radiological findings consistent with an LRTI. This article describes the methodology of this study. Specific interests include the clinical impact of non-ICU-acquired nosocomial pneumonia requiring ICU admission, non-ventilator-associated LRTIs occurring in the ICU, and ventilator-associated tracheobronchitis. The clinical meaning of microbiologically negative infectious episodes and specific details on antibiotic administration modalities, dosages and duration are also highlighted. Recently released guidelines address many unresolved questions that might be answered by such large-scale observational investigations. In light of the paucity of data on these topics, valuable new information is expected from our network's research activities, contributing to the optimisation of care for critically ill patients in the ICU.

  6. A Collaboration-Oriented M2M Messaging Mechanism for the Collaborative Automation between Machines in Future Industrial Networks

    PubMed Central

    Gray, John

    2017-01-01

    Machine-to-machine (M2M) communication is a key enabling technology for industrial internet of things (IIoT)-empowered industrial networks, where machines communicate with one another for collaborative automation and intelligent optimisation. This new industrial computing paradigm features high-quality connectivity, ubiquitous messaging, and interoperable interactions between machines. However, manufacturing IIoT applications have specificities that distinguish them from many other internet of things (IoT) scenarios in machine communications. By highlighting the key requirements and the major technical gaps of M2M in industrial applications, this article describes a collaboration-oriented M2M (CoM2M) messaging mechanism focusing on flexible connectivity and discovery, ubiquitous messaging, and semantic interoperability that are well suited for the production line-scale interoperability of manufacturing applications. The designs toward machine collaboration and data interoperability at both the communication and semantic levels are presented. Then, the application scenarios of the presented methods are illustrated with a proof-of-concept implementation in the PicknPack food packaging line. Eventually, the advantages and some potential issues are discussed based on the PicknPack practice. PMID:29165347

  7. A probabilistic neural network based approach for predicting the output power of wind turbines

    NASA Astrophysics Data System (ADS)

    Tabatabaei, Sajad

    2017-03-01

    Reliable tools for quantifying the uncertainty of wind speed forecasts are increasingly needed as wind power penetration grows. Traditional models that generate point forecasts can no longer be trusted on their own. Thus, the present paper uses the concept of prediction intervals (PIs) to assess the uncertainty of wind power generation in power systems. It applies a recently introduced non-parametric approach called lower upper bound estimation (LUBE) to build the PIs, since the forecasting errors cannot be modelled properly by probability distribution functions. In the proposed LUBE method, a PI combination-based fuzzy framework is used to overcome the performance instability of the neural networks (NNs) used in LUBE. In comparison to other methods, this formulation better satisfies the PI coverage probability and PI normalised average width (PINAW) criteria. Since this non-linear problem is highly complex, a new heuristic-based optimisation algorithm comprising a novel modification is introduced to solve it. Based on data sets taken from a wind farm in Australia, the feasibility and satisfactory performance of the suggested method have been demonstrated.
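
    The two criteria that LUBE trades off, PI coverage probability (PICP) and PI normalised average width (PINAW), are straightforward to compute. A sketch with invented observations and intervals:

```python
def pi_metrics(lower, upper, observed):
    """PI coverage probability (PICP) and PI normalised average width
    (PINAW): the fraction of observations falling inside their interval,
    and the mean interval width normalised by the data range."""
    n = len(observed)
    covered = sum(l <= y <= u for l, u, y in zip(lower, upper, observed))
    picp = covered / n
    spread = max(observed) - min(observed)   # normalising range R
    pinaw = sum(u - l for l, u in zip(lower, upper)) / (n * spread)
    return picp, pinaw

# Hypothetical wind-power observations with prediction intervals.
obs  = [3.0, 5.0, 8.0, 6.0, 4.0]
low  = [2.0, 4.5, 6.5, 5.0, 3.5]
high = [4.0, 6.0, 9.0, 7.0, 4.5]
picp, pinaw = pi_metrics(low, high, obs)  # full coverage, width 36% of range
```

A good set of PIs keeps PICP above a nominal confidence level while making PINAW as small as possible, which is exactly the tension the optimiser in the paper works against.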

  8. A multi-objective model for sustainable recycling of municipal solid waste.

    PubMed

    Mirdar Harijani, Ali; Mansour, Saeed; Karimi, Behrooz

    2017-04-01

    The efficient management of municipal solid waste is a major problem for large and populated cities. In many countries, the majority of municipal solid waste is landfilled or dumped owing to an inefficient waste management system. Therefore, an optimal and sustainable waste management strategy is needed. This study introduces a recycling and disposal network for sustainable utilisation of municipal solid waste. In order to optimise the network, we develop a multi-objective mixed integer linear programming model in which the economic, environmental and social dimensions of sustainability are concurrently balanced. The model is able to: select the best combination of waste treatment facilities; specify the type, location and capacity of waste treatment facilities; determine the allocation of waste to facilities; consider the transportation of waste and distribution of processed products; maximise the profit of the system; minimise the environmental footprint; maximise the social impacts of the system; and eventually generate an optimal and sustainable configuration for municipal solid waste management. The proposed methodology could be applied to any region around the world. Here, the city of Tehran, Iran, is presented as a real case study to show the applicability of the methodology.
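
    One common way to balance economic, environmental and social objectives, shown here purely as an illustration, is a weighted-sum scalarisation over candidate facility portfolios (the facility data and weights are invented, not the Tehran case study):

```python
from itertools import combinations

def best_portfolio(facilities, weights, budget):
    """Weighted-sum sketch of balancing profit, environmental footprint
    and social impact when selecting waste-treatment facilities; the
    paper solves a far richer multi-objective MILP with allocation and
    transportation, this only shows the scalarisation idea."""
    w_profit, w_env, w_social = weights
    best_score, best_set = float("-inf"), ()
    names = list(facilities)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(facilities[f]["cost"] for f in subset)
            if cost > budget:                # respect the capital budget
                continue
            score = sum(w_profit * facilities[f]["profit"]
                        - w_env * facilities[f]["emissions"]
                        + w_social * facilities[f]["jobs"]
                        for f in subset)
            if score > best_score:
                best_score, best_set = score, subset
    return best_set, best_score

facilities = {
    "recycling":   {"cost": 4, "profit": 6, "emissions": 1, "jobs": 3},
    "compost":     {"cost": 2, "profit": 2, "emissions": 1, "jobs": 2},
    "incinerator": {"cost": 5, "profit": 8, "emissions": 6, "jobs": 1},
}
chosen, score = best_portfolio(facilities, weights=(1.0, 0.5, 0.5), budget=8)
```

Sweeping the weights traces out different trade-offs; a full multi-objective solver would instead return the whole Pareto set at once.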

  9. Positioning and aligning CNTs by external magnetic field to assist localised epoxy cure

    NASA Astrophysics Data System (ADS)

    Ariu, G.; Hamerton, I.; Ivanov, D.

    2016-01-01

    This work focuses on the generation of conductive networks through the localised alignment of nanofillers, such as multi-walled carbon nanotubes (MWCNTs). The feasibility of alignment and positioning of functionalised MWCNTs by external DC magnetic fields was investigated. The aim of this manipulation is to enhance resin curing through AC induction heating due to hysteresis losses from the nanotubes. Experimental analyses focused on in-depth assessment of the nanotube functionalisation, processing and characterisation of the magnetic, rheological and cure kinetics properties of the MWCNT solution. The study showed that an external magnetic field has great potential for the positioning and alignment of CNTs, and demonstrated the potential for creating well-ordered architectures with an unprecedented level of control over network geometry. Magnetic characterisation indicated cobalt-plated nanotubes to be the most suitable candidate for magnetic alignment due to their high magnetic sensitivity. Epoxy/metal-plated CNT nanocomposite systems were validated by thermal analysis as induction heating media. The curing process could therefore be optimised by the use of dielectric resins. This study offers a first step towards the proof of concept of this technique as a novel repair technology.

  10. Self-assembly programming of DNA polyominoes.

    PubMed

    Ong, Hui San; Syafiq-Rahim, Mohd; Kasim, Noor Hayaty Abu; Firdaus-Raih, Mohd; Ramlan, Effirul Ikhwan

    2016-10-20

    Fabrication of functional DNA nanostructures operating at a cellular level has been accomplished through molecular programming techniques such as DNA origami and single-stranded tiles (SST). During implementation, restrictive and constraint-dependent designs are enforced to ensure conformity is attainable. We propose a concept of DNA polyominoes that promotes flexibility in molecular programming. The fabrication of complex structures is achieved through self-assembly of distinct heterogeneous shapes (i.e., self-organised optimisation among competing DNA basic shapes) with total flexibility during the design and assembly phases. In this study, the plausibility of the approach is validated using the formation of multiple 3×4 DNA networks fabricated from five basic DNA shapes with distinct configurations (monomino, tromino and tetrominoes). Computational tools to aid the design of compatible DNA shapes and the structure assembly assessment are presented. The formation of the desired structures was validated using Atomic Force Microscopy (AFM) imagery. Five 3×4 DNA networks were successfully constructed using combinations of these five distinct heterogeneous DNA shapes. Our findings revealed that the construction of DNA supra-structures could be achieved using a more natural-like orchestration as compared to the rigid and restrictive conventional approaches adopted previously. Copyright © 2016 Elsevier B.V. All rights reserved.
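
    Assembling a target lattice from heterogeneous polyomino shapes is, computationally, a tiling search. A toy backtracking tiler for a 3×4 board (the shape set is illustrative, anchored at each piece's top-left cell, and ignores rotations):

```python
def tile(grid_w, grid_h, shapes):
    """Backtracking search for one way to tile a grid_w x grid_h board
    with unlimited copies of the given polyominoes; a toy analogue of
    self-assembling a 3x4 network from competing basic shapes."""
    board = [[None] * grid_w for _ in range(grid_h)]

    def first_empty():
        for y in range(grid_h):
            for x in range(grid_w):
                if board[y][x] is None:
                    return x, y
        return None

    def solve():
        pos = first_empty()
        if pos is None:
            return True                       # board fully tiled
        x, y = pos
        for name, cells in shapes.items():
            coords = [(x + dx, y + dy) for dx, dy in cells]
            if all(0 <= cx < grid_w and 0 <= cy < grid_h
                   and board[cy][cx] is None for cx, cy in coords):
                for cx, cy in coords:         # place the piece
                    board[cy][cx] = name
                if solve():
                    return True
                for cx, cy in coords:         # undo on failure
                    board[cy][cx] = None
        return False

    return board if solve() else None

# Monomino, L-tromino and square tetromino as (dx, dy) cell offsets.
shapes = {"M": [(0, 0)],
          "L": [(0, 0), (0, 1), (1, 1)],
          "O": [(0, 0), (1, 0), (0, 1), (1, 1)]}
solution = tile(4, 3, shapes)
```

The physical self-assembly explores this space stochastically rather than by exhaustive search, but the combinatorial structure of the problem is the same.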

  11. A native IP satellite communications system

    NASA Astrophysics Data System (ADS)

    Koudelka, O.; Schmidt, M.; Ebert, J.; Schlemmer, H.; Kastner-Puschl, S.; Riedler, W.

    2004-08-01

    In the framework of ESA's ARTES-5 program the Institute of Applied Systems Technology (Joanneum Research) in cooperation with the Department of Communications and Wave Propagation has developed a novel meshed satellite communications system which is optimised for Internet traffic and applications (L*IP—Local Network Interconnection via Satellite Systems Using the IP Protocol Suite). Both symmetrical and asymmetrical connections are supported. Bandwidth on demand and guaranteed quality of service are key features of the system. A novel multi-frequency TDMA access scheme utilises efficient methods of IP encapsulation. In contrast to other solutions it avoids legacy transport network techniques. While the DVB-RCS standard is based on ATM or MPEG transport cells, the solution of the L*IP system uses variable-length cells which reduces the overhead significantly. A flexible and programmable platform based on Linux machines was chosen to allow the easy implementation and adaptation to different standards. This offers the possibility to apply the system not only to satellite communications, but provides seamless integration with terrestrial fixed broadcast wireless access systems. The platform is also an ideal test-bed for a variety of interactive broadband communications systems. The paper describes the system architecture and the key features of the system.
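
    The overhead argument against fixed transport cells is easy to quantify: a 53-byte ATM cell carries 48 bytes of payload and pads the last cell of each packet, whereas one variable-length cell per packet pays a single small header. A sketch (the 4-byte header is an assumption for illustration, not the actual L*IP framing):

```python
import math

def atm_overhead(packet_bytes, cell_payload=48, cell_header=5):
    """Bytes on the wire when an IP packet is segmented into fixed ATM
    cells (53-byte cells carrying 48 bytes of payload each, with the
    last cell padded out to a full payload)."""
    cells = math.ceil(packet_bytes / cell_payload)
    return cells * (cell_payload + cell_header)

def varcell_overhead(packet_bytes, header=4):
    """Bytes on the wire with a single variable-length cell per packet
    (illustrative 4-byte header; the real framing may differ)."""
    return packet_bytes + header

pkt = 576  # a common small IP datagram size
wire_atm, wire_var = atm_overhead(pkt), varcell_overhead(pkt)  # 636 vs 580
```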

  12. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2012-12-01

    Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimisation can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short and a long term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. 
This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum spatial density. So, it is proposed to virtually augment it by 25, 50, 100 and 160% which is the rate that would meet WMO requirements. Results suggest that for a given augmentation robust networks remain stable overall for the two time horizons.
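
    The variance-reduction placement can be sketched with simulated annealing over candidate gauge sites; here the mean distance to the nearest gauge stands in as a crude proxy for the mean kriging variance (a real implementation would evaluate the fitted cross-variograms):

```python
import math, random

def mean_nearest_distance(stations, domain):
    """Proxy objective: average distance from each domain point to its
    nearest gauge (the paper minimises the mean spatial kriging variance)."""
    return sum(min(math.dist(p, s) for s in stations)
               for p in domain) / len(domain)

def anneal(fixed, k, domain, steps=2000, t0=1.0, seed=0):
    """Simulated annealing placement of k new gauges given fixed ones."""
    rng = random.Random(seed)
    new = [rng.choice(domain) for _ in range(k)]
    cost = mean_nearest_distance(fixed + new, domain)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9           # linear cooling
        cand = list(new)
        cand[rng.randrange(k)] = rng.choice(domain)  # move one gauge
        c = mean_nearest_distance(fixed + cand, domain)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cost or rng.random() < math.exp((cost - c) / t):
            new, cost = cand, c
    return new, cost

domain = [(x, y) for x in range(10) for y in range(10)]   # candidate sites
fixed = [(0, 0), (9, 9)]                                  # existing gauges
placed, cost = anneal(fixed, k=3, domain=domain)
```

Running the annealing for each return period and time horizon, as the paper does, and keeping designs that perform acceptably across all of them is what makes the resulting network robust.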

  13. Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Khan, U. T.

    2016-12-01

    Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data-driven models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility theory based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. The overall impact of each component of this training scheme is to replace traditional ad hoc network configuration methods with ones based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network. 
The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method had higher performance (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.
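
    The possibility-to-probability step can be illustrated with the classical Dubois-Prade transform, which converts an ordered possibility distribution into a probability distribution. This is shown only as a sketch of the general idea, not necessarily the paper's exact transformation:

```python
def possibility_to_probability(pi):
    """Dubois-Prade transformation of a normalised possibility
    distribution pi_1 = 1 >= pi_2 >= ... >= pi_n into probabilities:
    p_i = sum_{j >= i} (pi_j - pi_{j+1}) / j, with pi_{n+1} = 0.
    The probabilities inherit the ordering of the possibilities and
    sum to one."""
    pi = sorted(pi, reverse=True)
    n = len(pi)
    ext = pi + [0.0]                      # append pi_{n+1} = 0
    return [sum((ext[j] - ext[j + 1]) / (j + 1) for j in range(i, n))
            for i in range(n)]

p = possibility_to_probability([1.0, 0.6, 0.2])
```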

  14. Robustness, efficiency, and optimality in the Fenna-Matthews-Olson photosynthetic pigment-protein complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Lewis A.; Habershon, Scott, E-mail: S.Habershon@warwick.ac.uk

Pigment-protein complexes (PPCs) play a central role in facilitating excitation energy transfer (EET) from light-harvesting antenna complexes to reaction centres in photosynthetic systems; understanding molecular organisation in these biological networks is key to developing better artificial light-harvesting systems. In this article, we combine quantum-mechanical simulations and a network-based picture of transport to investigate how chromophore organisation and protein environment in PPCs impact EET efficiency and robustness. In a prototypical PPC model, the Fenna-Matthews-Olson (FMO) complex, we consider the impact on EET efficiency of both disrupting the chromophore network and changing the influence of (local and global) environmental dephasing. Surprisingly, we find a large degree of resilience to changes in both chromophore network and protein environmental dephasing, the extent of which is greater than previously observed; for example, FMO maintains EET when 50% of the constituent chromophores are removed, or when environmental dephasing fluctuations vary over two orders of magnitude relative to the in vivo system. We also highlight the fact that the influence of local dephasing can be strongly dependent on the characteristics of the EET network and the initial excitation; for example, initial excitations resulting in rapid coherent decay are generally insensitive to the environment, whereas the incoherent population decay observed following excitation at weakly coupled chromophores demonstrates a more pronounced dependence on dephasing rate as a result of the greater possibility of local exciton trapping. Finally, we show that the FMO electronic Hamiltonian is not particularly optimised for EET; instead, it is just one of many possible chromophore organisations which demonstrate a good level of EET transport efficiency following excitation at different chromophores. 
Overall, these robustness and efficiency characteristics are attributed to the highly connected nature of the chromophore network and the presence of multiple EET pathways, features which might easily be built into artificial photosynthetic systems.

  15. Potential of dynamic spectrum allocation in LTE macro networks

    NASA Astrophysics Data System (ADS)

    Hoffmann, H.; Ramachandra, P.; Kovács, I. Z.; Jorguseski, L.; Gunnarsson, F.; Kürner, T.

    2015-11-01

    In recent years, Mobile Network Operators (MNOs) worldwide have been extensively deploying LTE networks in different spectrum bands and with different bandwidth configurations. Initially, deployment is coverage oriented, with macro cells using the lower LTE spectrum bands. As the offered traffic (i.e. the traffic requested by users) increases, the LTE deployment evolves: macro cells are expanded with additional capacity-boosting LTE carriers in higher frequency bands, complemented with micro or small cells in traffic hotspot areas. For MNOs it is crucial to use their LTE spectrum assets, as well as the installed network infrastructure, in the most cost-efficient way. Dynamic spectrum allocation (DSA) aims at (de)activating the available LTE frequency carriers according to temporal and spatial traffic variations in order to increase overall LTE system performance, in terms of total network capacity, by reducing interference. This paper evaluates the potential of DSA to achieve the envisaged performance improvement and identifies the system and traffic conditions in which DSA should be deployed. A self-optimised network (SON) DSA algorithm is also proposed and evaluated. The evaluations have been carried out in a hexagonal and a realistic site-specific urban macro layout, assuming a central traffic hotspot area surrounded by an area of lower traffic, with a total size of approximately 8 × 8 km2. The results show that possible DSA gains of up to 47% and up to 40% are achievable with regard to the carried system load (i.e. used resources) for a homogeneous traffic distribution with the hexagonal layout and for the realistic site-specific urban macro layout, respectively. The SON DSA algorithm evaluation in a realistic site-specific urban macro cell deployment scenario, including a realistic non-uniform spatial traffic distribution, shows insignificant cell throughput (i.e. served traffic) performance gains. 
Nevertheless, in the SON DSA investigations, a gain of up to 25 % has been observed when analysing the resource utilisation in the non-hotspot cells.
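
    A SON-style DSA rule can be as simple as load-threshold carrier (de)activation. A toy sketch (the thresholds and load trace are invented for illustration, not those of the evaluated algorithm):

```python
def active_carriers(load, carriers, on_thresh=0.7, off_thresh=0.3):
    """Toy SON-style dynamic spectrum allocation rule: activate another
    carrier when per-carrier load is high, deactivate one when it is
    low, always keeping at least one carrier on."""
    per_carrier = load / carriers
    if per_carrier > on_thresh:
        return carriers + 1
    if per_carrier < off_thresh and carriers > 1:
        return carriers - 1
    return carriers

carriers = 1
for load in [0.2, 0.5, 0.9, 1.6, 1.1, 0.5]:  # offered load over the day
    carriers = active_carriers(load, carriers)
# carriers ramp 1 -> 2 -> 3 at the peak, then fall back to 2
```

Real SON algorithms add hysteresis timers and interference estimates on top of such thresholds to avoid ping-pong (de)activation.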

  16. Convergent evidence for hierarchical prediction networks from human electrocorticography and magnetoencephalography.

    PubMed

    Phillips, Holly N; Blenkmann, Alejandro; Hughes, Laura E; Kochen, Silvia; Bekinschtein, Tristan A; Cam-Can; Rowe, James B

    2016-09-01

    We propose that sensory inputs are processed in terms of optimised predictions and prediction error signals within hierarchical neurocognitive models. The combination of non-invasive brain imaging and generative network models has provided support for hierarchical frontotemporal interactions in oddball tasks, including recent identification of a temporal expectancy signal acting on prefrontal cortex. However, these studies are limited by the need to invert magnetoencephalographic or electroencephalographic sensor signals to localise activity from cortical 'nodes' in the network, or to infer neural responses from indirect measures such as the fMRI BOLD signal. To overcome this limitation, we examined frontotemporal interactions estimated from direct cortical recordings from two human participants with cortical electrode grids (electrocorticography - ECoG). Their frontotemporal network dynamics were compared to those identified by magnetoencephalography (MEG) in forty healthy adults. All participants performed the same auditory oddball task with standard tones interspersed with five deviant tone types. We normalised post-operative electrode locations to standardised anatomic space, to compare across modalities, and inverted the MEG to cortical sources using the estimated lead field from subject-specific head models. A mismatch negativity signal in frontal and temporal cortex was identified in all subjects. Generative models of the electrocorticographic and magnetoencephalographic data were separately compared using the free-energy estimate of the model evidence. Model comparison confirmed the same critical features of hierarchical frontotemporal networks in each patient as in the group-wise MEG analysis. These features included bilateral, feedforward and feedback frontotemporal modulated connectivity, in addition to an asymmetric expectancy driving input on left frontal cortex. 
The invasive ECoG provides an important step in construct validation of the use of neural generative models of MEG, which in turn enables generalisation to larger populations. Together, they give convergent evidence for the hierarchical interactions in frontotemporal networks for expectation and processing of sensory inputs. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  17. Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.

    PubMed

    Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K

    2012-06-01

    Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase, varying parameters such as enzyme concentration, incubation time and temperature, was optimised. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h, in comparison to cellulase (359±0.30 μg) and pectinase (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM). A central composite design (CCD) was used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on RSM analysis, a temperature of 51-54°C, a time of 36-45 min and a cocktail of pectinase, cellulase and hemicellulase, each set at 2%, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a three-fold yield enhancement of stevioside. The isolated stevioside was characterised by ¹H-NMR spectroscopy, by comparison with a stevioside standard. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Evaluation of a High Throughput Starch Analysis Optimised for Wood

    PubMed Central

    Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco

    2014-01-01

Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative; however, they have not been optimised or tested with woody samples, which have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards of known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), and suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 21 µg and 40 mg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863

  19. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics

    PubMed Central

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics. PMID:26295151
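The multi-objective comparisons discussed above rest on Pareto dominance; a minimal sketch (all objectives minimised, sample points illustrative) is:

```python
# Sketch of Pareto dominance and nondominated filtering, the core operation
# behind multi-objective optimisation (all objectives minimised here).
def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = nondominated([(1, 5), (2, 2), (5, 1), (4, 4)])
```

Here (4, 4) is dominated by (2, 2) and is removed; the remaining points are the trade-offs a multi-objective evolutionary algorithm exposes.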

  20. 21SSD: a new public 21-cm EoR database

    NASA Astrophysics Data System (ADS)

    Eames, Evan; Semelin, Benoît

    2018-05-01

With current efforts inching closer to detecting the 21-cm signal from the Epoch of Reionization (EoR), proper preparation will require publicly available simulated models of the various forms the signal could take. In this work we present a database of such models, available at 21ssd.obspm.fr. The models are created with a fully-coupled radiative hydrodynamic simulation (LICORICE), and are created at high resolution (1024³). We also begin to analyse and explore the possible 21-cm EoR signals (with power spectra and pixel distribution functions), and study the effects of thermal noise on our ability to recover the signal out to high redshifts. Finally, we begin to explore the concept of 'distance' between different models, which represents a crucial step towards optimising parameter space sampling, training neural networks, and finally extracting parameter values from observations.

  1. An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Heydari, Ali; Balakrishnan, S. N.

    2014-12-01

    The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.

  2. On Optimal Development and Becoming an Optimiser

    ERIC Educational Resources Information Center

    de Ruyter, Doret J.

    2012-01-01

    The article aims to provide a justification for the claim that optimal development and becoming an optimiser are educational ideals that parents should pursue in raising their children. Optimal development is conceptualised as enabling children to grow into flourishing persons, that is persons who have developed (and are still developing) their…

  3. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
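The idea of optimising groups of variables rather than one at a time can be illustrated on a toy problem; the ferromagnetic Ising chain and the fixed block scheme below are illustrative assumptions, not the authors' benchmark.

```python
# Sketch of grouped-variable descent: flip whole blocks of spins at once on a
# toy Ising chain, escaping configurations single-spin moves cannot improve.
def energy(spins):
    """Ferromagnetic chain: aligned neighbours lower the energy."""
    return -sum(a * b for a, b in zip(spins, spins[1:]))

def block_descent(spins, block=4, sweeps=20):
    s = list(spins)
    for _ in range(sweeps):
        for i in range(0, len(s), block):
            flipped = s[:i] + [-x for x in s[i:i + block]] + s[i + block:]
            if energy(flipped) < energy(s):   # greedy: keep improving flips
                s = flipped
    return s
```

Two misaligned domains of four spins each form a trap for single-spin flips, but one block move aligns the whole chain.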

  4. Discovery and optimisation studies of antimalarial phenotypic hits

    PubMed Central

    Mital, Alka; Murugesan, Dinakaran; Kaiser, Marcel; Yeates, Clive; Gilbert, Ian H.

    2015-01-01

    There is an urgent need for the development of new antimalarial compounds. As a result of a phenotypic screen, several compounds with potent activity against the parasite Plasmodium falciparum were identified. Characterization of these compounds is discussed, along with approaches to optimise the physicochemical properties. The in vitro antimalarial activity of these compounds against P. falciparum K1 had EC50 values in the range of 0.09–29 μM, and generally good selectivity (typically >100-fold) compared to a mammalian cell line (L6). One example showed no significant activity against a rodent model of malaria, and more work is needed to optimise these compounds. PMID:26408453

  5. Design and optimisation of wheel-rail profiles for adhesion improvement

    NASA Astrophysics Data System (ADS)

    Liu, B.; Mei, T. X.; Bruni, S.

    2016-03-01

This paper describes a study on the optimisation of the wheel profile in the wheel-rail system to increase the overall level of adhesion available at the contact interface, and in particular to investigate how the wheel and rail profile combination may be designed to ensure the improved delivery of tractive/braking forces even in poor contact conditions. The research focuses on the geometric combination of the wheel and rail profiles, both to establish how the contact interface may be optimised to increase the adhesion level and to investigate how changes in the contact mechanics at the wheel-rail interface may in turn affect vehicle dynamic behaviour.

  6. Improving Vector Evaluated Particle Swarm Optimisation Using Multiple Nondominated Leaders

    PubMed Central

    Lim, Kian Sheng; Buyamin, Salinda; Ahmad, Anita; Shapiai, Mohd Ibrahim; Naim, Faradila; Mubin, Marizan; Kim, Dong Hwa

    2014-01-01

The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions did not converge close to the Pareto front, nor did they distribute evenly over it. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm: multiple nondominated solutions that are best at a respective objective function are used to guide particles in finding optimal solutions. The improved VEPSO is evaluated by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results of the conducted experiments show that the proposed VEPSO significantly improves on the existing VEPSO algorithms. PMID:24883386
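One of the quality metrics named above, generational distance, has a compact definition: the root-mean-square distance from each obtained solution to its nearest point on a reference (Pareto) set. A sketch, with an illustrative reference set:

```python
# Sketch of the generational distance metric: average (RMS) distance from
# each obtained solution to its nearest reference point. Data illustrative.
import math

def generational_distance(obtained, reference):
    dists = [min(math.dist(p, r) for r in reference) for p in obtained]
    return sum(d * d for d in dists) ** 0.5 / len(obtained)
```

A value of zero means every obtained solution lies exactly on the reference front.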

  7. Multiple Criteria Evaluation of Quality and Optimisation of e-Learning System Components

    ERIC Educational Resources Information Center

    Kurilovas, Eugenijus; Dagiene, Valentina

    2010-01-01

    The main research object of the paper is investigation and proposal of the comprehensive Learning Object Repositories (LORs) quality evaluation tool suitable for their multiple criteria decision analysis, evaluation and optimisation. Both LORs "internal quality" and "quality in use" evaluation (decision making) criteria are analysed in the paper.…

  8. Optimising the Parallelisation of OpenFOAM Simulations

    DTIC Science & Technology

    2014-06-01

Optimising the Parallelisation of OpenFOAM Simulations. Shannon Keough, Maritime Division, Defence Science and Technology Organisation, DSTO-TR-2987. Abstract: The OpenFOAM computational fluid dynamics toolbox allows parallel computation of... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI

  9. Optimising Microbial Growth with a Bench-Top Bioreactor

    ERIC Educational Resources Information Center

    Baker, A. M. R.; Borin, S. L.; Chooi, K. P.; Huang, S. S.; Newgas, A. J. S.; Sodagar, D.; Ziegler, C. A.; Chan, G. H. T.; Walsh, K. A. P.

    2006-01-01

    The effects of impeller size, agitation and aeration on the rate of yeast growth were investigated using bench-top bioreactors. This exercise, carried out over a six-month period, served as an effective demonstration of the importance of different operating parameters on cell growth and provided a means of determining the optimisation conditions…

  10. Engaging Homeless Individuals in Discussion about Their Food Experiences to Optimise Wellbeing: A Pilot Study

    ERIC Educational Resources Information Center

    Pettinger, Clare; Parsons, Julie M.; Cunningham, Miranda; Withers, Lyndsey; D'Aprano, Gia; Letherby, Gayle; Sutton, Carole; Whiteford, Andrew; Ayres, Richard

    2017-01-01

    Objective: High levels of social and economic deprivation are apparent in many UK cities, where there is evidence of certain "marginalised" communities suffering disproportionately from poor nutrition, threatening health. Finding ways to engage with these communities is essential to identify strategies to optimise wellbeing and life…

  11. Discontinuous permeable adsorptive barrier design and cost analysis: a methodological approach to optimisation.

    PubMed

    Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino

    2017-09-19

This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added; in particular, the optimised PAB-D reduced the total remediation costs by 40%.

  12. Escalated convergent artificial bee colony

    NASA Astrophysics Data System (ADS)

    Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu

    2016-03-01

Artificial bee colony (ABC) is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved to be a rival to some popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps exploration at the cost of exploitation. To obtain faster convergence while maintaining exploitation capability, the basic ABC is modified here in two ways. First, to improve exploitation, two local search strategies, namely classical unidimensional local search and Levy-flight random-walk-based local search, are incorporated into ABC. Second, a new solution search strategy, namely stochastic diffusion scout search, is incorporated into the scout bee phase to give abandoned solutions more chances to improve. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. The results are very promising and show it to be a competitive swarm intelligence-based algorithm.
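The Levy-flight random-walk local search mentioned above can be sketched with Mantegna's algorithm for drawing Levy-distributed step lengths; the parameter values and the greedy acceptance rule here are typical choices, not necessarily the paper's.

```python
# Sketch of a Levy-flight local search: heavy-tailed steps (Mantegna's
# algorithm) combined with a greedy keep-if-better rule. Settings illustrative.
import math, random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def local_search(x, f, steps=200, scale=0.1):
    """Greedy Levy-flight walk around x, keeping only improving moves."""
    best, fbest = x, f(x)
    for _ in range(steps):
        cand = best + scale * levy_step()
        if f(cand) < fbest:
            best, fbest = cand, f(cand)
    return best
```

Most steps are small (refining the current solution) while occasional long jumps help escape shallow local optima.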

  13. Optimization of physical conditions for the production of thermostable T1 lipase in Pichia guilliermondii strain SO using response surface methodology.

    PubMed

    Abu, Mary Ladidi; Nooh, Hisham Mohd; Oslan, Siti Nurbaya; Salleh, Abu Bakar

    2017-11-10

Pichia guilliermondii was found capable of expressing the recombinant thermostable lipase without methanol under the control of the methanol-dependent alcohol oxidase 1 promoter (AOXp 1). In this study, statistical approaches were employed for the screening and optimisation of physical conditions for T1 lipase production in P. guilliermondii. Screening of six physical conditions by Plackett-Burman Design identified pH, inoculum size and incubation time as exerting significant effects on lipase production. These three conditions were further optimised using the Box-Behnken Design of Response Surface Methodology, which predicted an optimum medium comprising pH 6, 24 h incubation time and 2% inoculum size. T1 lipase activity of 2.0 U/mL was produced with a biomass of OD600 23.0. Optimisation using RSM yielded a 3-fold increase in T1 lipase over the medium before optimisation. Therefore, this result has proven that T1 lipase can be produced at a higher yield in P. guilliermondii.

  14. Performance Analysis and Discussion on the Thermoelectric Element Footprint for PV-TE Maximum Power Generation

    NASA Astrophysics Data System (ADS)

    Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson

    2018-06-01

Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their individual effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated, and the performance of the PV-TE with different TE element lengths and footprint areas was analysed. The outcome showed that, regardless of the TE element length and footprint areas, the maximum power output occurs when A_n/A_p = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.

  15. Joint optimisation of arbitrage profits and battery life degradation for grid storage application of battery electric vehicles

    NASA Astrophysics Data System (ADS)

    Kies, Alexander

    2018-02-01

To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, because they have a higher energy/power density and longer life span compared to other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging/discharging causes batteries to degrade progressively, reducing their capacity. In this study, we investigate the impact of the joint optimisation of arbitrage revenue and battery degradation of electric vehicle batteries in a simplified setting, where historical prices allow for market participation of battery electric vehicle owners. It is shown that the joint optimisation of both leads to stronger gains than the sum of both optimisation strategies applied separately, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models that incorporate any kind of lithium-ion battery storage.
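The joint optimisation can be illustrated with a tiny dynamic programme over a discretised state of charge, in which every unit of battery throughput is charged a degradation cost; the prices, capacity and cost figures are made up for illustration and are not the paper's model.

```python
# Illustrative sketch of joint arbitrage/degradation optimisation: a dynamic
# programme over a discretised state of charge. Numbers are made up.
def best_profit(prices, soc_max=4, deg_cost=0.5):
    """Max profit over the horizon; one unit charged/discharged/idle per step."""
    value = {0: 0.0}   # value[s] = best profit so far ending at state of charge s
    for p in prices:
        new = {}
        for s, v in value.items():
            for ds in (-1, 0, 1):               # discharge / idle / charge
                s2 = s + ds
                if 0 <= s2 <= soc_max:
                    cash = -ds * p - abs(ds) * deg_cost   # trade minus wear cost
                    new[s2] = max(new.get(s2, float("-inf")), v + cash)
        value = new
    return max(value.values())
```

With a small degradation cost the battery cycles on price spreads; raising the cost makes the same spread unprofitable, so the optimum is to stay idle.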

  16. Modelling of auctioning mechanism for solar photovoltaic capacity

    NASA Astrophysics Data System (ADS)

    Poullikkas, Andreas

    2016-10-01

In this work, a modified optimisation model for the integration of renewable energy sources for power-generation (RES-E) technologies in power-generation systems on a unit commitment basis is developed. The purpose of the modified optimisation procedure is to account for RES-E capacity auctions at different solar photovoltaic (PV) capacity electricity prices. The optimisation model uses a genetic algorithm (GA) technique to calculate the required RES-E levy (or green tax) in the electricity bills. The procedure also enables the estimation of the adequate (or eligible) feed-in tariff to be offered to future RES-E systems that do not participate in the capacity auctioning procedure. In order to demonstrate the applicability of the optimisation procedure, the case of PV capacity auctioning for commercial systems is examined. The results indicate that the required green tax, which is charged to electricity customers through their bills in order to promote the use of RES-E technologies, is reduced with the reduction in the final auctioning price. This has a significant effect in terms of reducing electricity bills.

  17. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    PubMed Central

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D

    2017-01-01

    Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852

  18. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, fewer snapshots, closely spaced sources and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation remains attractive due to its global optimisation capability.
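The GA-based minimisation of a fitness function can be sketched generically; the real-coded operators and the toy quadratic fitness below are illustrative stand-ins for the paper's cumulant-based fitness, not its actual formulation.

```python
# A minimal real-coded genetic algorithm minimising a scalar fitness function.
# Operators (truncation selection, arithmetic crossover, Gaussian mutation)
# and all settings are illustrative choices.
import random

def ga_minimise(f, lo, hi, pop_size=30, gens=60, mut=0.2):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            c = (a + b) / 2                      # arithmetic crossover
            if random.random() < mut:            # Gaussian mutation
                c += random.gauss(0, 0.1 * (hi - lo))
            children.append(min(max(c, lo), hi)) # clamp to the search bounds
        pop = parents + children
    return min(pop, key=f)
```

Because the best half of the population always survives, the best fitness found never worsens between generations.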

  19. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Ming

    2014-12-01

Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling them. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market over 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  20. Characterisation and optimisation of flexible transfer lines for liquid helium. Part I: Experimental results

    NASA Astrophysics Data System (ADS)

    Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.

    2016-04-01

The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer appreciable amounts of LHe evaporate due to heat leak and pressure drop. The helium gas thus generated needs to be collected and reliquefied, which requires a large amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish a LHe transfer with minor evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors led to unique experimental data on this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.

  1. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

Due to the strongly non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two cases (with and without a centre element) for each of two three-ring CCAA designs (with 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms the other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) in all design cases. DA can be a promising technique for electromagnetic problems.
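The array factor whose sidelobes such syntheses minimise has a simple form for concentric rings of isotropic elements; the sketch below uses uniform (unoptimised) excitations and illustrative ring radii in wavelengths, not the paper's optimised weights.

```python
# Sketch of the concentric circular array factor (phi = 0 cut, isotropic
# elements, uniform unit excitations). Ring radii are illustrative.
import cmath, math

def array_factor(theta, rings, k=2 * math.pi):
    """|AF| at polar angle theta for rings given as (radius_in_wavelengths, n)."""
    total = 0j
    for radius, n in rings:
        for i in range(n):
            phi_i = 2 * math.pi * i / n          # element azimuth on the ring
            total += cmath.exp(1j * k * radius * math.sin(theta) * math.cos(phi_i))
    return abs(total)

rings = [(0.5, 4), (1.0, 6), (1.5, 8)]           # three rings of 4, 6, 8 elements
peak = array_factor(0.0, rings)                   # broadside: all terms in phase
```

A synthesis algorithm such as DA would replace the unit excitations with optimised complex weights and score each candidate by its worst sidelobe relative to this main-beam peak.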

  2. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that obtained with the traditional APF method. In addition, the improved method solves the dead-point problem effectively.
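The classical APF scheme that the paper builds on can be sketched as gradient descent on an attractive-plus-repulsive potential; the gains, influence radius and obstacle layout below are illustrative assumptions, not the paper's model.

```python
# Sketch of classical artificial potential field (APF) planning in 2D:
# attraction to the goal, repulsion inside radius d0 around each obstacle.
# All gains and geometry are illustrative.
import math

def apf_step(pos, goal, obstacles, ka=1.0, kr=0.5, d0=2.0, step=0.05):
    """One normalised APF move along the combined force direction."""
    fx = ka * (goal[0] - pos[0])
    fy = ka * (goal[1] - pos[1])
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 1e-9 < d < d0:                         # repulsion only inside d0
            mag = kr * (1 / d - 1 / d0) / d ** 2
            fx += mag * (pos[0] - ox) / d
            fy += mag * (pos[1] - oy) / d
    n = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)

def plan(start, goal, obstacles, max_iter=2000, tol=0.1):
    path = [start]
    while math.dist(path[-1], goal) > tol and len(path) < max_iter:
        path.append(apf_step(path[-1], goal, obstacles))
    return path
```

When the attractive and repulsive forces cancel, this basic scheme stalls; that is the dead-point problem the paper's optimal-control reformulation addresses.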

  3. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based metaheuristic search that relies on the social influence of bacteria co-agents in the search space of the problem. The algorithm faces tremendous hindrance when applied to discrete and graph-based problems, owing to its mathematical modelling and dynamic structure. This motivated the introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, since real-life discrete problems outnumber the continuous-domain problems representable by mathematical and numerical equations. In this work, we mainly simulate a graph-based multi-objective road optimisation problem and discuss the prospects of applying DBFO to similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the various parameters used in DBFO, which combine exploration and exploitation, are illustrated from the point of view of the problems. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and covered-path analysis. This makes the algorithm better at combination generation for graph-based and NP-hard problems.

  4. Optimisation of shape kernel and threshold in image-processing motion analysers.

    PubMed

    Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G

    2001-09-01

The aim of the work is to optimise the image processing of a motion analyser, in order to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique, which matches the expected marker shape, the so-called kernel, against image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy, by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. Comparison of the optimised kernels with the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel compared with 0.14 pixel, measured over all grids. An improvement of +37%, from 0.22 pixel to 0.14 pixel, was observed on the grid with the biggest markers.
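The cross-correlation matching at the core of the marker recognition can be sketched in one dimension (the real system works on 2D images); the triangular "marker" profile below is illustrative.

```python
# Sketch of normalised cross-correlation: slide a kernel over a signal and
# score each window; the peak marks the best match. 1D for brevity.
import math

def ncc(signal, kernel):
    """Normalised cross-correlation of kernel against each window of signal."""
    k = len(kernel)
    km = sum(kernel) / k
    kd = [x - km for x in kernel]
    kn = math.sqrt(sum(x * x for x in kd)) or 1.0
    out = []
    for i in range(len(signal) - k + 1):
        w = signal[i:i + k]
        wm = sum(w) / k
        wd = [x - wm for x in w]
        wn = math.sqrt(sum(x * x for x in wd)) or 1.0
        out.append(sum(a * b for a, b in zip(kd, wd)) / (kn * wn))
    return out

scores = ncc([0, 0, 1, 2, 1, 0, 0], [1, 2, 1])   # triangular "marker" profile
```

A GA can then tune the kernel shape so that the correlation peak stays sharp across all allowed marker sizes while noise windows score low.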

  5. Fast and fuzzy multi-objective radiotherapy treatment plan generation for head and neck cancer patients with the lexicographic reference point method (LRPM)

    NASA Astrophysics Data System (ADS)

    van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan

    2017-06-01

    Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
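
    A minimal discrete sketch of the prioritised (lexicographic) ɛ-constraint idea: optimise objectives strictly in priority order, each pass keeping only candidates within a small slack of the best value so far. The real 2pɛc method optimises continuous fluence variables; the plan scores below are hypothetical:

```python
def prioritised_eps_constraint(plans, objectives, slack=0.03):
    """Select a plan by lexicographic eps-constraint filtering over a
    discrete candidate set (toy version; objectives are minimised and
    listed highest-priority first)."""
    candidates = list(plans)
    for objective in objectives:
        best = min(objective(p) for p in candidates)
        # keep only plans within 'slack' of the best value reached so far
        candidates = [p for p in candidates
                      if objective(p) <= best + slack * abs(best) + 1e-12]
    return candidates[0]

# Hypothetical plans scored on (target coverage deficit, OAR dose):
plans = [
    {"ptv_deficit": 1.0, "oar_dose": 5.0},
    {"ptv_deficit": 1.0, "oar_dose": 3.0},
    {"ptv_deficit": 2.0, "oar_dose": 0.0},
]
chosen = prioritised_eps_constraint(
    plans, [lambda p: p["ptv_deficit"], lambda p: p["oar_dose"]])
```

    The LRPM instead scalarises all objectives into a single fuzzy-prioritised optimisation, which is what yields the reported speed-up over the sequential passes above.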

  6. Optimisation of solar synoptic observations

    NASA Astrophysics Data System (ADS)

    Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal

    2012-09-01

    The development of instrumental and computer technologies is accompanied by steadily increasing needs for archiving large data volumes. The current approaches to meeting this requirement are data compression and growth of storage capacities, but both have technical and practical limits. A further reduction of the archived data volume can be achieved by optimising the archiving itself: selecting data without losing useful information. We describe a method of optimised archiving of solar images based on selecting images that contain new information. The new information content is evaluated by analysing the changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes, which have a disturbing effect, and real changes, which provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun (including a period before the event onset), and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not preserve a uniform time step in the archived sequence and removes the information about solar oscillations. For long-term synoptic observations, however, optimised archiving can save a large amount of storage capacity. The actual saving will depend on the chosen change-detection sensitivity and on the capability to exclude fictitious changes.
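
    The selection principle can be sketched as follows, assuming a simple mean-absolute-difference change metric; the paper's actual detectors (e.g. for rejecting fictitious changes such as clouds) are more elaborate:

```python
import numpy as np

def select_frames(frames, threshold):
    """Archive a frame only if it differs 'enough' from the last
    archived frame (minimal sketch of change-driven archiving)."""
    archive = [frames[0]]                    # always keep the first image
    for frame in frames[1:]:
        change = np.mean(np.abs(frame - archive[-1]))   # change metric
        if change > threshold:               # new information -> archive
            archive.append(frame)
    return archive

# Three unchanged frames followed by one changed frame:
frames = [np.zeros((4, 4)) for _ in range(3)] + [np.ones((4, 4))]
archive = select_frames(frames, threshold=0.1)
```

    Here only two of the four frames are archived; the non-uniform time step this produces is exactly why the method is unsuitable for helioseismology.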

  7. Treatment planning optimisation in proton therapy

    PubMed Central

    McGowan, S E; Burnet, N G; Lomax, A J

    2013-01-01

    The goal of radiotherapy is to achieve uniform target coverage while sparing normal tissue. In proton therapy, the same sources of geometric uncertainty are present as in conventional radiotherapy. However, an important and fundamental difference in proton therapy is that protons have a finite range, highly dependent on the electron density of the material they are traversing, resulting in a steep dose gradient at the distal edge of the Bragg peak. Therefore, an accurate knowledge of the sources and magnitudes of the uncertainties affecting the proton range is essential for producing plans which are robust to these uncertainties. This review describes the current knowledge of the geometric uncertainties and discusses their impact on proton dose plans. Patient-specific validation is essential, and in cases of complex intensity-modulated proton therapy plans the use of a planning target volume (PTV) may fail to ensure coverage of the target. In cases where a PTV cannot be used, other methods of quantifying plan quality have been investigated. A promising option is to incorporate uncertainties directly into the optimisation algorithm. A further development is the inclusion of robustness into a multicriteria optimisation framework, allowing a multi-objective Pareto optimisation function to balance robustness and conformity. The question remains as to whether adaptive therapy can become an integral part of proton therapy, allowing re-optimisation during the course of a patient's treatment. The challenge of ensuring that plans are robust to range uncertainties in proton therapy remains, although these methods can provide practical solutions. PMID:23255545

  8. Artificial neural network (ANN) approach for modeling of Pb(II) adsorption from aqueous solution by Antep pistachio (Pistacia Vera L.) shells.

    PubMed

    Yetilmezsoy, Kaan; Demirel, Sevgi

    2008-05-30

    A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ion removal from aqueous solution by Antep pistachio (Pistacia Vera L.) shells, based on 66 experimental sets obtained in a laboratory batch study. The effects of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of the batch test results, optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 degrees C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After backpropagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency using a tangent sigmoid transfer function (tansig) at the hidden layer, with 11 neurons, and a linear transfer function (purelin) at the output layer. The Levenberg-Marquardt algorithm (LMA) was found to be the best of the 11 BP algorithms, with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets was shown to be satisfactory, with a correlation coefficient of about 0.936 for the five model variables used in this study.
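
    The network shape quoted above (a tansig hidden layer feeding a purelin output) can be sketched as a forward pass; the weights below are random placeholders, not the trained values from the study:

```python
import numpy as np

def ann_forward(x, W1, b1, W2, b2):
    """Forward pass of a three-layer MLP in the abstract's configuration:
    'tansig' (tanh) hidden layer, 'purelin' (linear) output layer."""
    h = np.tanh(W1 @ x + b1)        # tansig hidden layer
    return W2 @ h + b2              # purelin output layer

rng = np.random.default_rng(1)
n_in, n_hidden = 5, 11              # 5 operational variables, 11 hidden neurons
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(1, n_hidden))
b2 = rng.normal(size=1)

x = rng.normal(size=n_in)           # one (scaled) input pattern
y = ann_forward(x, W1, b1, W2, b2)  # predicted removal efficiency (untrained)
```

    In the study the weights were fitted with Levenberg-Marquardt backpropagation after PCA preprocessing of the inputs.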

  9. State-of-the-Art in UAV Remote Sensing Survey - First Insights into Applications of UAV Sensing Systems

    NASA Astrophysics Data System (ADS)

    Aasen, H.

    2017-08-01

    UAVs are increasingly adopted as remote sensing platforms. Together with specialised sensors, they become powerful sensing systems for environmental monitoring and surveying. Spectral data has great potential for gathering information about biophysical and biochemical properties. Still, capturing meaningful spectral data in a reproducible way is not trivial. In the last few years, small and lightweight spectral sensors that can be carried on small flexible platforms have become available. With their adoption in the community, the responsibility for ensuring data quality is increasingly shifted from specialised companies and agencies to individual researchers or research teams. Owing to the complexity of spectral data acquisition, this poses a challenge for the community, and standardised protocols, metadata and best-practice procedures are needed to make data intercomparable. In November 2016, the ESSEM COST action Innovative Optical Tools for Proximal Sensing of Ecophysiological Processes (OPTIMISE; http://optimise.dcs.aber.ac.uk/) held a workshop on best practices for UAV spectral sampling. The objective of this meeting was to trace the way from particle to pixel, identify influences on data quality and reliability, and determine how well the community is currently doing with spectral sampling from UAVs and how it can improve. Additionally, a survey was designed for distribution within the community to obtain an overview of current practices and raise awareness of the topic. This contribution introduces the approach of the OPTIMISE community towards best practices in UAV spectral sampling and presents the first results of the survey (http://optimise.dcs.aber.ac.uk/uav-survey/).

  10. Design and optimisation of a composite skin for a morphing wing

    NASA Astrophysics Data System (ADS)

    Michaud, Francois

    Economic and environmental concerns are major drivers for the development of new aeronautical technologies. The MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, arose from this perspective. Its objective is to design an actively morphing wing that improves laminarity and thereby reduces the aircraft's fuel consumption and emissions. The research carried out designed and optimised an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimisation method was developed with the objective of minimising the mass of the composite skin while ensuring that, through active control of the deformable surface, it conforms to the desired aerodynamic profiles. The optimisation process also included strength, stability and stiffness constraints on the composite skin. After optimisation, the optimised skin was simplified to ease manufacturing and to comply with the design rules of Bombardier Aeronautique. This optimisation process produced a composite skin whose deviations from the optimised aerodynamic profiles were greatly reduced. Aerodynamic analyses based on the resulting shapes predicted good improvements in laminarity. A series of analytical validations was then carried out to verify the structural integrity of the composite skin, following the methods generally used by Bombardier Aeronautique. First, a comparative finite-element analysis validated that the morphing wing's stiffness was equivalent to that of the original wing section. The finite-element model was then coupled with calculation spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool, LJ 85 BJSFM GO.v9, developed by Bombardier Aeronautique. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loads and material allowables.

  11. Effects of the distribution density of a biomass combined heat and power plant network on heat utilisation efficiency in village-town systems.

    PubMed

    Zhang, Yifei; Kang, Jian

    2017-11-01

    The building of biomass combined heat and power (CHP) plants is an effective means of developing biomass energy because they can satisfy demands for winter heating and electricity consumption. The purpose of this study was to analyse the effect of the distribution density of a biomass CHP plant network on heat utilisation efficiency in a village-town system. The distribution density is determined based on the heat transmission threshold, and the heat utilisation efficiency is determined based on the heat demand distribution, heat output efficiency, and heat transmission loss. The objective of this study was to ascertain the optimal value for the heat transmission threshold using a multi-scheme comparison based on an analysis of these factors. To this end, a model of a biomass CHP plant network was built using geographic information system tools to simulate and generate three planning schemes with different heat transmission thresholds (6, 8, and 10 km) according to the heat demand distribution. The heat utilisation efficiencies of these planning schemes were then compared by calculating the gross power, heat output efficiency, and heat transmission loss of the biomass CHP plant for each scenario. This multi-scheme comparison yielded the following results: when the heat transmission threshold was low, the distribution density of the biomass CHP plant network was high and the biomass CHP plants tended to be relatively small. In contrast, when the heat transmission threshold was high, the distribution density of the network was low and the biomass CHP plants tended to be relatively large. When the heat transmission threshold was 8 km, the distribution density of the biomass CHP plant network was optimised for efficient heat utilisation. 
To promote the development of renewable energy sources, a planning scheme for a biomass CHP plant network that maximises heat utilisation efficiency can be obtained using the optimal heat transmission threshold and the nonlinearity coefficient for local roads. Copyright © 2017 Elsevier Ltd. All rights reserved.
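
    The trade-off behind the multi-scheme comparison can be sketched as a toy calculation. The distances, output efficiency and loss rate below are assumed illustration values, not the paper's GIS-derived figures (which, in particular, make 8 km the optimum once plant-size effects on output efficiency are accounted for):

```python
def scheme_efficiency(threshold_km, demands_km, output_eff, loss_per_km):
    """Heat utilisation of one planning scheme: plant output efficiency,
    discounted by distance-dependent transmission losses, summed over
    the demand points within the transmission threshold and expressed
    as a fraction of total demand (toy model)."""
    served = [d for d in demands_km if d <= threshold_km]
    if not served:
        return 0.0
    delivered = sum(output_eff * (1 - loss_per_km * d) for d in served)
    return delivered / len(demands_km)

demands = [1, 3, 5, 7, 9, 12]          # demand-point distances in km (assumed)
results = {t: scheme_efficiency(t, demands, output_eff=0.85,
                                loss_per_km=0.02)
           for t in (6, 8, 10)}        # the paper's three thresholds
```

    In this simplified model a larger threshold always serves more demand; the study's conclusion that 8 km is optimal rests on the countervailing effect of plant size on heat output efficiency, which is held fixed here.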

  12. Prior knowledge guided active modules identification: an integrated multi-objective approach.

    PubMed

    Chen, Weiqi; Liu, Jing; He, Shan

    2017-03-14

    An active module, defined as an area in a biological network that shows striking changes in molecular activity or phenotypic signatures, is important for revealing dynamic and process-specific information correlated with cellular or disease states. A prior-information-guided active module identification approach is proposed to detect modules that are both active and enriched in prior knowledge. We formulate the active module identification problem as a multi-objective optimisation problem with two conflicting objective functions: maximising the coverage of known biological pathways and maximising the activity of the module. The network is constructed from a protein-protein interaction database. A beta-uniform mixture model is used to estimate the distribution of p-values and to generate activity scores from microarray data. A multi-objective evolutionary algorithm is used to search for Pareto optimal solutions. We also incorporate a novel constraint based on algebraic connectivity to ensure that the identified active modules are connected. Applying the proposed algorithm to a small yeast molecular network shows that it identifies modules with high activity and with more cross-talk nodes between related functional groups. The Pareto solutions generated by the algorithm provide different trade-offs between prior knowledge and novel information from the data. The approach is then applied to microarray data from diclofenac-treated yeast cells to build a network and identify modules that elucidate the molecular mechanisms of diclofenac toxicity and resistance. Gene Ontology analysis is applied to the identified modules for biological interpretation. Integrating knowledge of functional groups into active module identification is an effective method that provides flexible control of the balance between purely data-driven methods and prior-information guidance.
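
    The connectedness constraint can be checked via the algebraic connectivity, i.e. the second-smallest eigenvalue of the graph Laplacian, which is positive exactly when an undirected graph is connected. A minimal sketch on toy adjacency matrices:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A;
    strictly positive iff the undirected graph is connected."""
    deg = np.diag(adj.sum(axis=1))          # degree matrix D
    lap = deg - adj                          # Laplacian
    eig = np.sort(np.linalg.eigvalsh(lap))   # Laplacian is symmetric
    return eig[1]

# A path on 3 nodes (connected) vs. a graph with an isolated node:
path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], float)
split = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 0]], float)
```

    A candidate module whose induced subgraph has zero algebraic connectivity can thus be rejected or repaired during the evolutionary search.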

  13. Specialist integrated haematological malignancy diagnostic services: an Activity Based Cost (ABC) analysis of a networked laboratory service model.

    PubMed

    Dalley, C; Basarir, H; Wright, J G; Fernando, M; Pearson, D; Ward, S E; Thokula, P; Krishnankutty, A; Wilson, G; Dalton, A; Talley, P; Barnett, D; Hughes, D; Porter, N R; Reilly, J T; Snowden, J A

    2015-04-01

    Specialist Integrated Haematological Malignancy Diagnostic Services (SIHMDS) were introduced as a standard of care within the UK National Health Service to reduce diagnostic error and improve clinical outcomes. Two broad models of service delivery have become established: 'co-located' services operating from a single site, and 'networked' services, with geographically separated laboratories linked by common management and information systems. No detailed systematic cost analysis has been published on any established SIHMDS model. We used Activity Based Costing (ABC) to construct a cost model for our regional 'networked' SIHMDS, covering a two-million population, based on activity in 2011. Overall estimated annual running costs were £1 056 260 per annum (£733 400 excluding consultant costs), with running costs for the diagnosis, staging, disease monitoring and end-of-treatment assessment components of £723 138, £55 302, £184 152 and £94 134 per annum, respectively. The cost distribution by department was 28.5% for haematology, 29.5% for histopathology and 42% for genetics laboratories. Costs of the diagnostic pathways varied considerably: the pathways for myelodysplastic syndromes and lymphoma were the most expensive, while those for essential thrombocythaemia and polycythaemia vera were the least. ABC analysis enables estimation of the running costs of a SIHMDS model comprising 'networked' laboratories. Similar cost analyses for other SIHMDS models covering varying populations are warranted to optimise quality and cost-effectiveness in the delivery of modern haemato-oncology diagnostic services in the UK as well as internationally. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study.

    PubMed

    De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W

    2017-12-01

    Automated methods to evaluate the growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed and can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars, an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). The aim of this study was to develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers reached consensus on the stages; when necessary, a third observer acted as referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest, and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer learning, as a type of deep learning convolutional neural network approach, outperformed all other tested approaches.
Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was 0.82. The overall performance of the presented automated pilot technique to stage lower third molar development on panoramic radiographs was similar to staging by human observers. It will be further optimised in future research, since it represents a necessary step to achieve a fully automated dental age estimation method, which to date is not available.
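
    The linearly weighted kappa quoted above can be computed as in this illustrative sketch, assuming stage labels are integers 0..n-1 (the study's stages would be mapped accordingly):

```python
import numpy as np

def linear_weighted_kappa(a, b, n_stages):
    """Linearly weighted kappa between two stagings: 1 minus the ratio
    of observed to chance-expected weighted disagreement, with weights
    proportional to the absolute stage difference."""
    a, b = np.asarray(a), np.asarray(b)
    # weight matrix: |i - j| penalises larger stage discrepancies more
    w = np.abs(np.arange(n_stages)[:, None] - np.arange(n_stages)[None, :])
    conf = np.zeros((n_stages, n_stages))
    for i, j in zip(a, b):
        conf[i, j] += 1
    conf /= conf.sum()                               # joint distribution
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1 - (w * conf).sum() / (w * expected).sum()
```

    Perfect agreement yields kappa = 1 and systematic maximal disagreement yields negative values, which is why a weighted kappa of 0.82 alongside a raw accuracy of 0.51 indicates that most automated errors were only one stage off.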

  15. Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.

    PubMed

    Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P

    2017-01-01

    To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the final steps of AT provision (delivery and instruction, use, maintenance and evaluation) were optimised. In co-creation with all stakeholders, and based on a list of requirements, an integral method and accompanying tools were developed.

  16. 76 FR 21087 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... developed an enhanced technology trading platform (the ``Optimise platform''). To assure a smooth transition... Optimise trading platform and will continue to do so up to the launch of the new technology and during the... tested and is available for the launch. The Exchange believes that it will be less disruptive to members...

  17. High School Learners' Mental Construction during Solving Optimisation Problems in Calculus: A South African Case Study

    ERIC Educational Resources Information Center

    Brijlall, Deonarain; Ndlovu, Zanele

    2013-01-01

    This qualitative case study in a rural school in Umgungundlovu District in KwaZulu-Natal, South Africa, explored Grade 12 learners' mental constructions of mathematical knowledge during engagement with optimisation problems. Ten Grade 12 learners who do pure Mathematics participated, and data were collected through structured activity sheets and…

  18. Optimising the Blended Learning Environment: The Arab Open University Experience

    ERIC Educational Resources Information Center

    Hamdi, Tahrir; Abu Qudais, Mohammed

    2018-01-01

    This paper will offer some insights into possible ways to optimise the blended learning environment based on experience with this modality of teaching at Arab Open University/Jordan branch and also by reflecting upon the results of several meta-analytical studies, which have shown blended learning environments to be more effective than their face…

  19. Critical review of membrane bioreactor models--part 2: hydrodynamic and integrated models.

    PubMed

    Naessens, W; Maere, T; Ratkovich, N; Vedantam, S; Nopens, I

    2012-10-01

    Membrane bioreactor technology has existed for a couple of decades but has not yet conquered the market, owing to serious drawbacks of which the operational cost caused by fouling is the major contributor. Knowledge build-up and optimisation for such complex systems can benefit heavily from mathematical modelling. In this paper, the vast literature on hydrodynamic and integrated MBR modelling is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling, with little attention to system design and optimisation. Integrated models also focus on fouling, although those including costs lean towards optimisation. Trends are discussed, knowledge gaps are identified and interesting routes for further research are suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Optimisation of oxygen ion transport in materials for ceramic membrane devices.

    PubMed

    Kilner, J A

    2007-01-01

    Oxygen transport in ceramic oxide materials has received much attention over the past few decades. Much of this interest has stemmed from the desire to construct high temperature electrochemical devices for energy conversion, an example being the solid oxide fuel cell. In order to achieve high performance for these devices, insights are needed in how to achieve optimum performance from the functional components such as the electrolytes and electrodes. This includes the optimisation of oxygen transport through the crystal lattice of electrode and electrolyte materials and across the homogeneous (grain boundary) and heterogeneous interfaces that exist in real devices. Strategies are discussed for the optimisation of these quantities and current problems in the characterisation of interfacial transport are explored.

  1. Amorphous Ge quantum dots embedded in crystalline Si: ab initio results.

    PubMed

    Laubscher, M; Küfner, S; Kroll, P; Bechstedt, F

    2015-10-14

    We study amorphous Ge quantum dots embedded in a crystalline Si matrix through structure modeling and simulation using ab initio density functional theory including spin-orbit interaction and quasiparticle effects. Three models are generated by replacing a spherical region within diamond Si by Ge atoms and creating a disordered bond network with appropriate density inside the Ge quantum dot. After total-energy optimisations of the atomic geometry we compute the electronic and optical properties. We find three major effects: (i) the resulting nanostructures adopt a type-I heterostructure character; (ii) the lowest optical transitions occur only within the Ge quantum dots and do not involve or cross the Ge-Si interface; and (iii) for larger amorphous Ge quantum dots, with diameters of about 2.0 and 2.7 nm, absorption peaks appear in the mid-infrared spectral region. These are promising candidates for intense luminescence at photon energies below the gap energy of bulk Ge.

  2. Safety and governance issues for neonatal transport services.

    PubMed

    Ratnavel, Nandiran

    2009-08-01

    Neonatal transport is a subspecialty within the field of neonatology. Transport services are developing rapidly in the United Kingdom (UK), with network demographics and funding patterns leading to a broad spectrum of service provision. Applying principles of clinical governance and safety to such a diverse landscape of transport services is challenging but is finally receiving much needed attention. To understand the risk management issues associated with this branch of retrieval medicine, one needs to look at the infrastructure of transport teams, arrangements for governance, risk identification, incident reporting, feedback and learning from experience. One also needs to look at audit processes, training, communication and ways of team working. Adherence to current recommendations for equipment and vehicle design is vital. The national picture for neonatal transport is evolving, and this is an excellent time to start benchmarking and sharing best practice with a view to optimising safety and reducing risk.

  3. Updating irradiated graphite disposal: Project 'GRAPA' and the international decommissioning network.

    PubMed

    Wickham, Anthony; Steinmetz, Hans-Jürgen; O'Sullivan, Patrick; Ojovan, Michael I

    2017-05-01

    Demonstrating competence in planning and executing the disposal of radioactive wastes is a key factor in the public perception of the nuclear power industry and must be demonstrated when making the case for new nuclear build. This work addresses the particular waste stream of irradiated graphite, mostly derived from reactor moderators and amounting to more than 250,000 tonnes world-wide. Use may be made of its unique chemical and physical properties to consider possible processing and disposal options outside the normal simple classifications and repository options for mixed low or intermediate-level wastes. The IAEA has an obvious involvement in radioactive waste disposal and has established a new project 'GRAPA' - Irradiated Graphite Processing Approaches - to encourage an international debate and collaborative work aimed at optimising and facilitating the treatment of irradiated graphite. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A wireless energy transfer platform, integrated at the bedside.

    PubMed

    De Clercq, Hans; Puers, Robert

    2013-01-01

    This paper presents the design of a wireless energy transfer platform integrated at the bedside. The system contains a matrix of identical inductive power transmitters, which are optimised to provide power to a wearable sensor network for wirelessly recording vital signals over an extended period of time. The magnetic link operates at a transfer frequency of 6.78 MHz and is able to transfer 3.3 mW to the remote side at an inter-coil distance of 100 mm. The total efficiency of the power link is 26%. Moreover, the platform is able to dynamically determine the position of freely moving sensor nodes and selectively induce a magnetic field in the area where the sensor nodes are positioned. As a result, the patient is not subjected to unnecessary radiation, and the specific absorption rate standards are met more easily.
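
    For context, the textbook upper bound on resonant inductive link efficiency, as a function of the coupling coefficient and coil quality factors, can be sketched as follows; the k and Q values used are assumptions for illustration, not the paper's measured coil parameters behind the reported 26%:

```python
import math

def max_link_efficiency(k, q1, q2):
    """Classic maximum power-transfer efficiency of a resonant inductive
    link: eta = x / (1 + sqrt(1 + x))^2 with x = k^2 * Q1 * Q2."""
    x = k * k * q1 * q2
    return x / (1 + math.sqrt(1 + x)) ** 2

# Hypothetical loosely coupled coils at a large inter-coil distance:
eta = max_link_efficiency(0.05, 100, 100)
```

    The formula shows why efficiency falls off sharply with distance: k drops rapidly as the coils separate, which motivates the platform's strategy of energising only the transmitter nearest each sensor node.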

  5. Design of a new low-phase-noise millimetre-wave quadrature voltage-controlled oscillator

    NASA Astrophysics Data System (ADS)

    Kashani, Zeinab; Nabavi, Abdolreza

    2018-07-01

    This paper presents a new circuit topology for a millimetre-wave quadrature voltage-controlled oscillator (QVCO) using an improved Colpitts oscillator without tail bias. By employing an extra capacitance between the drain and source terminations of the transistors and optimising circuit values, a low-power, low-phase-noise (PN) oscillator is designed. To generate output signals with a 90° phase difference, a self-injection coupling network between two identical cores is used. The proposed QVCO dissipates no extra dc power for coupling, since there is no dc path to ground for the coupled transistors, and no extra noise is added to the circuit. In a standard 180-nm CMOS technology, the best figure-of-merit is -188.5 and the power consumption is 14.98-15.45 mW, for a centre frequency of 58.2 GHz and a tuning range from 59.3 to 59.6 GHz. The PN is -104.86 dBc/Hz at a 1-MHz offset.

  6. Development of an ANN optimized mucoadhesive buccal tablet containing flurbiprofen and lidocaine for dental pain.

    PubMed

    Hussain, Amjad; Syed, Muhammad Ali; Abbas, Nasir; Hanif, Sana; Arshad, Muhammad Sohail; Bukhari, Nadeem Irfan; Hussain, Khalid; Akhlaq, Muhammad; Ahmad, Zeeshan

    2016-06-01

    A novel mucoadhesive buccal tablet containing flurbiprofen (FLB) and lidocaine HCl (LID) was prepared to relieve dental pain. Tablet formulations (F1-F9) were prepared using variable quantities of mucoadhesive agents, hydroxypropyl methyl cellulose (HPMC) and sodium alginate (SA). The formulations were evaluated for their physicochemical properties, mucoadhesive strength and mucoadhesion time, swellability index and in vitro release of active agents. Release of both drugs depended on the relative ratio of HPMC:SA. However, mucoadhesive strength and mucoadhesion time were better in formulations, containing higher proportions of HPMC compared to SA. An artificial neural network (ANN) approach was applied to optimise formulations based on known effective parameters (i.e., mucoadhesive strength, mucoadhesion time and drug release), which proved valuable. This study indicates that an effective buccal tablet formulation of flurbiprofen and lidocaine can be prepared via an optimized ANN approach.

  7. Mining nutrigenetics patterns related to obesity: use of parallel multifactor dimensionality reduction.

    PubMed

    Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K

    2015-01-01

    This paper aims to elucidate the complex aetiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors for predicting obesity status. These methods did not, however, reveal how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.

  8. Use of historic metabolic biotransformation data as a means of anticipating metabolic sites using MetaPrint2D and Bioclipse.

    PubMed

    Carlsson, Lars; Spjuth, Ola; Adams, Samuel; Glen, Robert C; Boyer, Scott

    2010-07-01

    Predicting metabolic sites is important in the drug discovery process to aid in rapid compound optimisation. No interactive tool exists and most of the useful tools are quite expensive. Here a fast and reliable method to analyse ligands and visualise potential metabolic sites is presented which is based on annotated metabolic data, described by circular fingerprints. The method is available via the graphical workbench Bioclipse, which is equipped with advanced features in cheminformatics. Due to the speed of predictions (less than 50 ms per molecule), scientists can get real time decision support when editing chemical structures. Bioclipse is a rich client, which means that all calculations are performed on the local computer and do not require network connection. Bioclipse and MetaPrint2D are free for all users, released under open source licenses, and available from http://www.bioclipse.net.

  9. Effects of real time control of sewer systems on treatment plant performance and receiving water quality.

    PubMed

    Frehmann, T; Niemann, A; Ustohal, P; Geiger, W F

    2002-01-01

    Four individual mathematical submodels simulating different subsystems of urban drainage were coupled into an integrated model. The submodels (for surface runoff, flow in the sewer system, the wastewater treatment plant and the receiving water) were calibrated against field data measured in an existing urban catchment. Three different strategies for controlling the discharge in the sewer network were defined and implemented in the integrated model. The impact of these control measures was quantified by representative immission state parameters of the receiving water. The results reveal that the effect of a control measure may be ambivalent, depending on which component of the complex drainage system is considered. Furthermore, it is demonstrated that the drainage system in the investigated catchment can be considerably optimised towards environmental protection and operational efficiency if appropriate real time control is applied on the integral scale.

  10. Access Protocol For An Industrial Optical Fibre LAN

    NASA Astrophysics Data System (ADS)

    Senior, John M.; Walker, William M.; Ryley, Alan

    1987-09-01

    A structure for OSI levels 1 and 2 of a local area network suitable for use in a variety of industrial environments is reported. It is intended that the LAN will utilise optical fibre technology at the physical level and a hybrid of dynamically optimisable token passing and CSMA/CD techniques at the data link (IEEE 802 medium access control - logical link control) level. An intelligent token passing algorithm is employed which dynamically allocates tokens according to the known upper limits on the requirements of each device. In addition a system of stochastic tokens is used to increase efficiency when the stochastic traffic is significant. The protocol also allows user-defined priority systems to be employed and is suitable for distributed or centralised implementation. The results of computer simulated performance characteristics for the protocol using a star-ring topology are reported which demonstrate its ability to perform efficiently with the device and traffic loads anticipated within an industrial environment.
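
    The dynamic token-allocation idea above can be sketched as apportioning the transmission opportunities in each cycle in proportion to each device's known upper traffic limit. The device names and limits below are invented, and largest-remainder apportionment is an assumed mechanism, not the paper's algorithm.

```python
# Minimal sketch of proportional token allocation: tokens (transmission
# opportunities per cycle) are split according to each device's declared
# upper traffic limit. Device names and limits are illustrative only.

def allocate_tokens(limits, tokens_per_cycle):
    """Largest-remainder apportionment of tokens by declared upper limits."""
    total = sum(limits.values())
    shares = {d: tokens_per_cycle * lim / total for d, lim in limits.items()}
    alloc = {d: int(s) for d, s in shares.items()}
    # hand out any leftover tokens to the largest fractional remainders
    leftover = tokens_per_cycle - sum(alloc.values())
    for d in sorted(shares, key=lambda k: shares[k] - alloc[k], reverse=True)[:leftover]:
        alloc[d] += 1
    return alloc

limits = {"plc": 50, "sensor_hub": 30, "hmi": 15, "logger": 5}   # kbit/s upper limits
alloc = allocate_tokens(limits, tokens_per_cycle=20)
print(alloc)
```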

  11. Integration of Power to Methane in a waste water treatment plant - A feasibility study.

    PubMed

    Patterson, Tim; Savvas, Savvas; Chong, Alex; Law, Ian; Dinsdale, Richard; Esteves, Sandra

    2017-12-01

    The integration of a biomethanation system within a wastewater treatment plant for conversion of CO2 and H2 to CH4 has been studied. Results indicate that the CO2 could be utilised to produce an additional 13,420 m3/day of CH4, equivalent to approximately 133,826 kWh of energy. The whole conversion process including electrolysis was found to have an energetic efficiency of 66.2%. The currently un-optimised biomethanation element of the process had a parasitic load of 19.9% of produced energy, and strategies to reduce this to <5% are identified. The system could provide strategic benefits such as integrated management of electricity and gas networks, energy storage and maximising the deployment and efficiency of renewable energy assets. However, no policy or financial frameworks exist to attribute value to these increasingly important functions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Monitoring methods and predictive models for water status in Jonathan apples.

    PubMed

    Trincă, Lucia Carmen; Căpraru, Adina-Mirela; Arotăriţei, Dragoş; Volf, Irina; Chiruţă, Ciprian

    2014-02-01

    Evaluation of water status in Jonathan apples was performed over 20 days. Loss of moisture content (LMC) was determined by slow drying of whole apples, and moisture content (MC) by oven drying and lyophilisation of apple samples (chunks, crushed and juice). We developed a non-destructive method to evaluate the LMC and MC of apples using image processing and a multilayer neural network (NN) predictor. We propose a new simple algorithm that selects texture descriptors from an initial, heuristically chosen set. Both the structure and the weights of the NN are optimised by a genetic algorithm with a variable-length genotype, which led to a high-precision predictive model (R(2)=0.9534). In our opinion, the development of this non-destructive method for the assessment of LMC and MC (and of other chemical parameters) is very promising for the online inspection of food quality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Psychotropic substances in indoor environments.

    PubMed

    Cecinato, Angelo; Romagnoli, Paola; Perilli, Mattia; Patriarca, Claudia; Balducci, Catia

    2014-10-01

    The presence of drugs in outdoor air has been established, but few investigations have been conducted indoors. This study focused on psychotropic substances (PSs) at three schools, four homes and one office in Rome, Italy. The indoor drug concentrations and the relationships with the outdoor atmosphere were investigated. The optimised monitoring procedure allowed for the determination of cocaine, cannabinoids and particulate fractions of nicotine and caffeine. In-field experiments were performed during the winter, spring and summer seasons. Psychotropic substances were observed in all indoor locations. The indoor concentrations often exceeded those recorded both outdoors at the same sites and at the atmospheric pollution control network stations, indicating that the drugs were released into the air at the inside sites or were more persistent. During winter, the relative concentrations of cannabinol, cannabidiol and tetrahydrocannabinol depended on site and indoor/outdoor location at the site. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Performance evaluation of reactive and proactive routing protocol in IEEE 802.11 ad hoc network

    NASA Astrophysics Data System (ADS)

    Hamma, Salima; Cizeron, Eddy; Issaka, Hafiz; Guédon, Jean-Pierre

    2006-10-01

    Wireless technology based on the IEEE 802.11 standard is widely deployed. This technology is used to support multiple types of communication services (data, voice, image) with different QoS requirements. A MANET (Mobile Ad hoc NETwork) does not require a fixed infrastructure; mobile nodes communicate through multihop paths. The wireless communication medium has variable and unpredictable characteristics, and node mobility creates a continuously changing communication topology in which paths break and new ones form dynamically. The routing table of each router in an ad hoc network must therefore be kept up-to-date. MANETs use Distance Vector or Link State algorithms, which ensure that the route to every host is always known. However, this approach must take into account the specific characteristics of ad hoc networks: dynamic topologies, limited bandwidth, energy constraints, limited physical security, ... Two main categories of routing protocols are studied in this paper: proactive protocols (e.g. Optimised Link State Routing - OLSR) and reactive protocols (e.g. Ad hoc On Demand Distance Vector - AODV, Dynamic Source Routing - DSR). Proactive protocols rely on periodic exchanges that update the routing tables for all possible destinations, even if no traffic goes through them. Reactive protocols rely on on-demand route discoveries that update routing tables only for destinations with active traffic. The present paper focuses on the study and performance evaluation of these categories using NS2 simulations, considering both qualitative and quantitative criteria. The qualitative criteria concern distributed operation, loop-freedom, security and sleep-period operation; the quantitative criteria (end-to-end data delay, jitter, packet delivery ratio, routing load and activity distribution) are used to assess the performance of the routing protocols presented in this paper. A comparative study is presented for a number of networking contexts, and the results show the appropriate routing protocol for two kinds of communication services (data and voice).
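
    The quantitative criteria mentioned above (packet delivery ratio, end-to-end delay, jitter) can be computed from a simulation trace. The trace format below is a made-up stand-in for NS2 output, not its real format: one `(send_time_s, recv_time_s or None)` pair per packet.

```python
# Compute routing-performance metrics from a per-packet trace. A packet with
# recv_time None was dropped. Trace values below are invented for illustration.

def trace_metrics(trace):
    delivered = [(s, r) for s, r in trace if r is not None]
    pdr = len(delivered) / len(trace)                 # packet delivery ratio
    delays = [r - s for s, r in delivered]
    mean_delay = sum(delays) / len(delays)            # end-to-end delay
    # jitter as mean absolute difference between consecutive packet delays
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
              if len(delays) > 1 else 0.0)
    return pdr, mean_delay, jitter

trace = [(0.00, 0.05), (0.10, 0.16), (0.20, None), (0.30, 0.34), (0.40, 0.47)]
pdr, mean_delay, jitter = trace_metrics(trace)
print(f"PDR={pdr:.2f}  mean delay={mean_delay * 1000:.1f} ms  jitter={jitter * 1000:.1f} ms")
```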

  15. Exploring the impact of big data in economic geology using cloud-based synthetic sensor networks

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Robertson, J.

    2015-12-01

    In a market demanding lower resource prices and increasing efficiencies, resources companies are increasingly looking to the realm of real-time, high-frequency data streams to better measure and manage their minerals processing chain, from pit to plant to port. Sensor streams can include real-time drilling engineering information, data streams from mining trucks, and on-stream sensors operating in the plant feeding back rich chemical information. There are also many opportunities to deploy new sensor streams - unlike environmental monitoring networks, the mine environment is not energy- or bandwidth-limited. Although the promised efficiency dividends are inviting, the path to achieving these is difficult to see for most companies. As well as knowing where to invest in new sensor technology and how to integrate the new data streams, companies must grapple with risk-laden changes to their established methods of control to achieve maximum gains. What is required is a sandbox data environment for the development of analysis and control strategies at scale, allowing companies to de-risk proposed changes before actually deploying them to a live mine environment. In this presentation we describe our approach to simulating real-time scalable data streams in a mine environment. Our sandbox consists of three layers: (a) a ground-truth layer that contains geological models, which can be statistically based on historical operations data, (b) a measurement layer - a network of RESTful synthetic sensor microservices which can simulate measurements of ground-truth properties, and (c) a control layer, which integrates the sensor streams and drives the measurement and optimisation strategies. The control layer could be a new machine learner, or simply a company's existing data infrastructure. 
Containerisation allows rapid deployment of large numbers of sensors, as well as service discovery to form a dynamic network of thousands of sensors, at a far lower cost than physically building the network.

  16. The optimisation of low-acceleration interstellar relativistic rocket trajectories using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Fung, Kenneth K. H.; Lewis, Geraint F.; Wu, Xiaofeng

    2017-04-01

    A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2c. Such transit times are clearly impractical, and thus, any kind of colonisation using low acceleration rockets would be difficult. High accelerations, on the order of 1 g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology requirement for a journey that uses realistic amounts of fuel remains to be determined.
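
    The flat-spacetime kinematics underlying such studies can be sketched with the standard constant-proper-acceleration formulas (accelerate to the midpoint, flip, decelerate). This is textbook special relativity, not the paper's genetic-algorithm optimiser, and the example distance and acceleration are chosen arbitrarily.

```python
# Hedged sketch: constant-proper-acceleration relativistic travel times.
# d = (c^2/a)(cosh(a*tau/c) - 1), t = (c/a) sinh(a*tau/c), v = c tanh(a*tau/c).
import math

C = 299_792_458.0            # speed of light, m/s
LY = 9.4607e15               # metres per light year
YR = 3.1557e7                # seconds per Julian year

def journey(distance_ly, accel_g):
    """Return (coordinate_time_yr, proper_time_yr, peak_speed_over_c)."""
    a = accel_g * 9.80665
    d_half = distance_ly * LY / 2.0
    # invert d = (c^2/a)(cosh(a*tau/c) - 1) for the proper time to the midpoint
    tau_half = (C / a) * math.acosh(1.0 + a * d_half / C**2)
    t_half = (C / a) * math.sinh(a * tau_half / C)
    beta_peak = math.tanh(a * tau_half / C)
    return 2 * t_half / YR, 2 * tau_half / YR, beta_peak

# an arbitrary low-acceleration case: 1000 ly at one thousandth of a g
t, tau, beta = journey(distance_ly=1000.0, accel_g=0.001)
print(f"coordinate time {t:.0f} yr, proper time {tau:.0f} yr, peak speed {beta:.3f} c")
```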

  17. AlphaMate: a program for optimising selection, maintenance of diversity, and mate allocation in breeding programs.

    PubMed

    Gorjanc, Gregor; Hickey, John M

    2018-05-02

    AlphaMate is a flexible program that optimises selection, maintenance of genetic diversity, and mate allocation in breeding programs. It can be used in animal populations and in cross- and self-pollinating plant populations, which may be subject to selective breeding or conservation management. The problem is formulated as a multi-objective optimisation of a valid mating plan that is solved with an evolutionary algorithm. A valid mating plan is defined by a combination of mating constraints (the number of matings, the maximal number of parents, the minimal/equal/maximal number of contributions per parent, or allowance for selfing) that are gender specific or generic. The optimisation can maximise genetic gain, minimise group coancestry, minimise inbreeding of individual matings, or maximise genetic gain for a given increase in group coancestry or inbreeding. Users provide a list of candidate individuals with associated gender and selection criteria information (if applicable) and a coancestry matrix. Selection criteria and the coancestry matrix can be based on pedigree or genome-wide markers. Additional individual- or mating-specific information can be included to enrich the optimisation objectives. An example of rapid recurrent genomic selection in wheat demonstrates how AlphaMate can double the efficiency of converting genetic diversity into genetic gain compared to truncation selection. Another example demonstrates the use of genome editing to expand the gain-diversity frontier. Executable versions of AlphaMate for Windows, Mac, and Linux platforms are available at http://www.AlphaGenes.roslin.ed.ac.uk/AlphaMate. gregor.gorjanc@roslin.ed.ac.uk.
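
    The gain-versus-coancestry trade-off at the heart of such tools can be sketched on a toy contribution problem. The breeding values and coancestry matrix below are invented, and the penalised hill-climb is a simple stand-in for AlphaMate's evolutionary optimisation over full mating plans.

```python
# Toy sketch: choose parental contributions c (non-negative, summing to 1)
# to trade genetic gain c'a against group coancestry c'Kc. All data invented.
import random

a = [2.0, 1.8, 1.0, 0.5]                      # breeding values of 4 candidates
K = [[0.50, 0.25, 0.05, 0.00],                # pairwise coancestry (hypothetical)
     [0.25, 0.50, 0.05, 0.00],
     [0.05, 0.05, 0.50, 0.10],
     [0.00, 0.00, 0.10, 0.50]]

def objective(c, lam):
    gain = sum(ci * ai for ci, ai in zip(c, a))
    coancestry = sum(c[i] * K[i][j] * c[j] for i in range(4) for j in range(4))
    return gain - lam * coancestry            # penalised gain

def optimise(lam, iters=5000, seed=1):
    rng = random.Random(seed)
    c = [0.25] * 4                            # start from equal contributions
    for _ in range(iters):
        i, j = rng.randrange(4), rng.randrange(4)
        step = min(rng.uniform(0.0, 0.05), c[j])   # move mass j -> i, keep sum = 1
        trial = c[:]
        trial[i] += step
        trial[j] -= step
        if objective(trial, lam) > objective(c, lam):
            c = trial
    return c

truncation = [1.0, 0.0, 0.0, 0.0]             # all matings to the single best parent
balanced = optimise(lam=10.0)
print("balanced contributions:", [round(x, 3) for x in balanced])
```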

  18. Analysis of the car body stability performance after coupler jack-knifing during braking

    NASA Astrophysics Data System (ADS)

    Guo, Lirong; Wang, Kaiyun; Chen, Zaigang; Shi, Zhiyong; Lv, Kaikai; Ji, Tiancheng

    2018-06-01

    This paper aims to improve car body stability by optimising locomotive parameters when coupler jack-knifing occurs during braking. In order to prevent car body instability caused by coupler jack-knifing, a multi-locomotive simulation model and a series of field braking tests were developed to analyse the influence of the secondary suspension and the secondary lateral stopper on car body stability during braking. According to the simulation and test results, increasing the secondary lateral stiffness helps to limit the car body yaw angle during braking, but it seriously affects the dynamic performance of the locomotive. For the secondary lateral stopper, lateral stiffness and free clearance have a significant influence on improving car body stability while having less effect on the dynamic performance of the locomotive. An optimised configuration was proposed and adopted on the test locomotive: the lateral stiffness of the secondary lateral stopper was increased to 7875 kN/m, while its free clearance was decreased to 10 mm. The optimised locomotive has excellent dynamic and safety performance. Compared with the original locomotive, the maximum car body yaw angle and coupler rotation angle of the optimised locomotive were reduced by 59.25% and 53.19%, respectively, in practical application. The maximum derailment coefficient was 0.32, and the maximum wheelset lateral force was 39.5 kN. Hence, reasonable parameters for the secondary lateral stopper can improve the car body stability and the running safety of heavy haul locomotives.

  19. Dietary optimisation with omega-3 and omega-6 fatty acids for 12-23-month-old overweight and obese children in urban Jakarta.

    PubMed

    Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina

    2016-12-01

    Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of locally based diets that consider the ratio of these fatty acids for improving the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focusing on the omega-6/omega-3 fatty acid intake ratio for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body-mass-index-for-age Z-scores and selected through multistage random sampling. A 24-h recall was performed on 3 non-consecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plans fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range, and the omega-6/omega-3 fatty acid ratio was between 4 and 10. Moreover, micronutrient intake was within the range of the recommended daily allowance or estimated average requirement and below the tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements of overweight and obese children.
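
    The diet-optimisation step can be sketched on a toy problem: minimise cost subject to an energy floor and an omega-6/omega-3 ratio between 4 and 10. All food data (names, nutrient values, prices, serving limits) are invented, and exhaustive search over integer servings is a brute-force stand-in for the study's linear programme.

```python
# Brute-force stand-in for the linear-programming diet model: pick daily
# servings (0-5 of each food) meeting >= 900 kcal at minimum cost while
# keeping omega-6/omega-3 between 4 and 10. All food data are hypothetical.
from itertools import product

#              kcal, omega6_g, omega3_g, cost
foods = {
    "rice":      (130, 0.10, 0.01, 0.20),
    "egg":       ( 70, 0.80, 0.04, 0.15),
    "fish":      ( 90, 0.20, 0.90, 0.60),
    "soy_oil_t": ( 40, 2.50, 0.30, 0.05),   # per teaspoon
}
names = list(foods)

best = None
for servings in product(range(6), repeat=len(names)):
    kcal = sum(n * foods[f][0] for n, f in zip(servings, names))
    o6   = sum(n * foods[f][1] for n, f in zip(servings, names))
    o3   = sum(n * foods[f][2] for n, f in zip(servings, names))
    cost = sum(n * foods[f][3] for n, f in zip(servings, names))
    if kcal < 900 or o3 == 0 or not (4.0 <= o6 / o3 <= 10.0):
        continue                              # infeasible plan
    if best is None or cost < best[0]:
        best = (cost, dict(zip(names, servings)))

print("cheapest feasible plan:", best)
```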

  20. The Optimisation of the Expression of Recombinant Surface Immunogenic Protein of Group B Streptococcus in Escherichia coli by Response Surface Methodology Improves Humoral Immunity.

    PubMed

    Díaz-Dinamarca, Diego A; Jerias, José I; Soto, Daniel A; Soto, Jorge A; Díaz, Natalia V; Leyton, Yessica Y; Villegas, Rodrigo A; Kalergis, Alexis M; Vásquez, Abel E

    2018-03-01

    Group B Streptococcus (GBS) is the leading cause of neonatal meningitis and a common pathogen in the livestock and aquaculture industries around the world. Conjugate polysaccharide and protein-based vaccines are under development. The surface immunogenic protein (SIP) is conserved in all GBS serotypes and has been shown to be a good target for vaccine development. The expression of recombinant proteins in Escherichia coli has proved useful in vaccine development, and protein purification is a factor affecting immunogenicity. Response surface methodology (RSM) with a Box-Behnken design can optimise the performance of recombinant protein expression. However, the biological effect in mice immunised with an immunogenic protein optimised by RSM and purified by low-affinity chromatography was unknown. In this study, we used RSM to optimise the expression of rSIP, and we evaluated the SIP-specific humoral response and the ability to reduce GBS colonisation of the vaginal tract in female mice. It was observed by Ni-NTA chromatography that RSM increases the yield of rSIP expression, enabling a better purification process. This improvement in rSIP purification suggests a better induction of the anti-SIP IgG immune response and a positive effect in reducing GBS intravaginal colonisation. RSM applied to optimise the expression of recombinant proteins with immunogenic capacity is an interesting alternative for the evaluation of vaccines in the preclinical phase, and could improve their immune response.

  1. Optimising the Collaborative Practice of Nurses in Primary Care Settings Using a Knowledge Translation Approach

    ERIC Educational Resources Information Center

    Oelke, Nelly; Wilhelm, Amanda; Jackson, Karen

    2016-01-01

    The role of nurses in primary care is poorly understood and many are not working to their full scope of practice. Building on previous research, this knowledge translation (KT) project's aim was to facilitate nurses' capacity to optimise their practice in these settings. A Summit engaging Alberta stakeholders in a deliberative discussion was the…

  2. Quadratic Optimisation with One Quadratic Equality Constraint

    DTIC Science & Technology

    2010-06-01

    This report presents a theoretical framework for minimising a quadratic objective function subject to a quadratic equality constraint. The first part of the report gives a detailed algorithm which computes the global minimiser without calling special nonlinear optimisation solvers. The second part of the report shows how the developed theory can be applied to solve the time of arrival geolocation problem.
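
    The problem class can be illustrated on a small instance: minimise a quadratic f(x) = x'Ax + b'x subject to the quadratic equality constraint x'x = 1. The brute-force parametrisation of the unit circle below is only an illustration of the problem, not the report's global-minimiser algorithm, and A and b are arbitrary.

```python
# Illustrative 2-D instance: minimise x'Ax + b'x subject to x'x = 1 by
# scanning the unit circle. Not the report's algorithm; A, b are arbitrary.
import math

A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, -1.0]

def f(x):
    quad = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
    return quad + b[0] * x[0] + b[1] * x[1]

# every feasible point is (cos t, sin t); scan a fine grid of angles
best_theta, best_val = 0.0, f((1.0, 0.0))
for k in range(1, 100000):
    t = 2 * math.pi * k / 100000
    val = f((math.cos(t), math.sin(t)))
    if val < best_val:
        best_theta, best_val = t, val

x_star = (math.cos(best_theta), math.sin(best_theta))
print("constrained minimiser ~", x_star, "f =", best_val)
```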

  3. Optimising fuel treatments over time and space

    Treesearch

    Woodam Chung; Greg Jones; Kurt Krueger; Jody Bramel; Marco Contreras

    2013-01-01

    Fuel treatments have been widely used as a tool to reduce catastrophic wildland fire risks in many forests around the world. However, it is a challenging task for forest managers to prioritise where, when and how to implement fuel treatments across a large forest landscape. In this study, an optimisation model was developed for long-term fuel management decisions at a...

  4. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures for the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
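
    The core of the noiseless coding stage is classic Huffman coding, which can be sketched generically (this is the textbook construction, not the AAC codebook logic or a hardware architecture):

```python
# Minimal Huffman code construction: repeatedly merge the two lowest-weight
# subtrees; symbols in the "0" subtree get a 0 prefix, the others a 1 prefix.
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Map symbol -> bitstring, built from symbol frequencies."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    uid = len(heap)                       # tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + p for s, p in c1.items()}
        merged.update({s: "1" + p for s, p in c2.items()})
        heapq.heappush(heap, (w1 + w2, uid, merged))
        uid += 1
    return heap[0][2]

data = "abracadabra"
codes = huffman_codes(Counter(data))
encoded = "".join(codes[ch] for ch in data)
print(codes, len(encoded), "bits vs", len(data) * 8, "bits fixed-length")
```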

  5. Fox-7 for Insensitive Boosters

    DTIC Science & Technology

    2010-08-01

    cavitation, and therefore nucleation, to occur at each frequency. As well as producing ultrasound at different frequencies, the method of delivery of... processing techniques using ultrasound, designed to optimise FOX-7 crystal size and morphology to improve booster formulations, and results from these... 7 booster formulations. Also included are particle processing techniques using ultrasound, designed to optimise FOX-7 crystal size and morphology

  6. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. By accounting for the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. Provided sample measurement accuracy is high, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.

  7. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
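
    The basic ɛ-constraint idea can be sketched on a toy bi-objective problem: minimise one objective while constraining the other below a budget ɛ, then sweep ɛ to trace the Pareto front. The surrogates and grid below are invented; the paper's contribution is the adaptive bisection choice of ɛ values, which is not reproduced here.

```python
# Sketch of the epsilon-constraint method on a toy bi-objective problem over
# a discrete design grid. The objective surrogates are purely illustrative.

def f1(x):  # e.g. a fuel-burn surrogate
    return (x - 1.0) ** 2

def f2(x):  # e.g. a flight-time surrogate
    return (x + 1.0) ** 2

grid = [i / 100.0 for i in range(-200, 201)]

def eps_constraint(eps):
    """Minimise f1 subject to f2(x) <= eps; returns the best x, or None."""
    feasible = [x for x in grid if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# Sweep the budget on f2 to trace out Pareto-optimal (f1, f2) pairs.
front = []
for eps in [0.25, 1.0, 2.25, 4.0]:
    x = eps_constraint(eps)
    front.append((f1(x), f2(x)))
print(front)
```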

  8. Optimisation of the formulation of a bubble bath by a chemometric approach: market segmentation and optimisation.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among the four proposed to the consumers; the chosen essence was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydrating agent and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were asked to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor one. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed the identification of the best formulations for the two segments of the market.
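
    The Principal Component Analysis step can be sketched by projecting consumers' hedonic scores onto the first principal component, which separates the two preference segments. The score matrix is invented, and PCA is done by power iteration on a 2x2 covariance matrix rather than a chemometrics package.

```python
# PCA sketch for panel-test segmentation. Rows = consumers, columns =
# (liking of rich-formulation samples, liking of poor-formulation samples).
# Scores are invented for illustration.

scores = [(8, 2), (9, 1), (7, 3), (2, 8), (1, 9), (3, 7)]

n = len(scores)
means = [sum(col) / n for col in zip(*scores)]
centred = [(x - means[0], y - means[1]) for x, y in scores]

# sample covariance matrix of the two attributes
cov = [[sum(a[i] * a[j] for a in centred) / (n - 1) for j in range(2)]
       for i in range(2)]

# power iteration for the dominant eigenvector (first principal component)
v = [1.0, 0.0]
for _ in range(100):
    w = [cov[0][0] * v[0] + cov[0][1] * v[1],
         cov[1][0] * v[0] + cov[1][1] * v[1]]
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = [w[0] / norm, w[1] / norm]

pc1 = [c[0] * v[0] + c[1] * v[1] for c in centred]   # consumer scores on PC1
print("PC1 loadings:", v, "consumer scores on PC1:", pc1)
```

The sign of each consumer's PC1 score splits the panel into the two market segments.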

  9. Subject Specific Optimisation of the Stiffness of Footwear Material for Maximum Plantar Pressure Reduction.

    PubMed

    Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan

    2017-08-01

    Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16 and 19% respectively. Moreover, the optimum stiffness was strongly correlated to body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.

  10. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
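
    One standard estimator of the kind this optimisation builds on is Higuchi's method, sketched below; the paper's contribution (non-linearly scaling the amplitude before estimation and optimising the multiplier) is not reproduced here. A straight line should yield a dimension close to 1.

```python
# Higuchi fractal-dimension estimator for a 1-D series: curve lengths L(k) at
# coarse-graining scales k scale as k**(-D); D is the slope of log L(k)
# against log(1/k).
import math

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D series by Higuchi's method."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lk = 0.0
        for m in range(k):
            pts = x[m::k]                     # subsampled series starting at m
            if len(pts) < 2:
                continue
            length = sum(abs(b - a) for a, b in zip(pts, pts[1:]))
            # normalisation factor from Higuchi's definition
            lk += length * (n - 1) / (((len(pts) - 1) * k) * k)
        logs.append((math.log(1.0 / k), math.log(lk / k)))
    # least-squares slope of log L(k) against log(1/k) gives the dimension
    mx = sum(p[0] for p in logs) / len(logs)
    my = sum(p[1] for p in logs) / len(logs)
    slope = (sum((p[0] - mx) * (p[1] - my) for p in logs) /
             sum((p[0] - mx) ** 2 for p in logs))
    return slope

line = [0.01 * i for i in range(1000)]        # a straight line has dimension ~1
print(round(higuchi_fd(line), 2))
```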

  11. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With ever-increasing requirements for product quality and safe operation in industrial production, complex large-scale processes are difficult to describe with integer-order differential equations; fractional differential equations, however, may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained using the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is decomposed into the optimisation of small-scale subsystems, each regarded as a sub-plant controlled within the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem, and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. Information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
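
    The Oustaloup filter used in the record is not reproduced here; as a hedged illustration of how a fractional-order derivative can be realised numerically at all, the sketch below uses the standard Grünwald–Letnikov discretisation instead (the step size and test signal are arbitrary assumptions):

```python
import math

def gl_fractional_derivative(x, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of the
    sampled signal x (with x[0] taken at t = 0), evaluated at the last sample."""
    c, acc = 1.0, x[-1]
    # binomial weights via the recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j)
    for j in range(1, len(x)):
        c *= 1.0 - (alpha + 1.0) / j
        acc += c * x[-1 - j]
    return acc / h ** alpha

h = 1e-3
t = [k * h for k in range(1001)]               # samples of x(t) = t on [0, 1]
d1 = gl_fractional_derivative(t, 1.0, h)       # ordinary first derivative -> 1
d_half = gl_fractional_derivative(t, 0.5, h)   # half-order derivative at t = 1
analytic = 2 / math.sqrt(math.pi)              # known value for x(t) = t at t = 1
```

    For alpha = 1 the recurrence collapses to the usual backward difference, and for alpha = 0.5 the sum converges to the analytic half-derivative as the step shrinks.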

  12. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals, and they can therefore produce contradictory results on extreme signals. These methods also use different scalings, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running-average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. It also provides an additional filtering effect that makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  13. Model fitting for small skin permeability data sets: hyperparameter optimisation in Gaussian Process Regression.

    PubMed

    Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick

    2018-03-01

    The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size while retaining the range, or 'chemical space', of the key descriptors, in order to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel resulted in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
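
    As a minimal sketch of the quantity these hyperparameter-optimisation methods work with, the code below grid-searches the length-scale of an RBF kernel by maximising the Gaussian Process log marginal likelihood on toy one-dimensional data (the data, grid and fixed variances are assumptions, not the study's skin-permeability sets):

```python
import numpy as np

def log_marginal_likelihood(x, y, length_scale, signal_var=1.0, noise_var=1e-2):
    """Log marginal likelihood of a zero-mean GP with an RBF kernel."""
    d = x[:, None] - x[None, :]
    k = signal_var * np.exp(-0.5 * (d / length_scale) ** 2)
    k += noise_var * np.eye(len(x))
    alpha = np.linalg.solve(k, y)
    sign, logdet = np.linalg.slogdet(k)
    return -0.5 * y @ alpha - 0.5 * logdet - 0.5 * len(x) * np.log(2 * np.pi)

x = np.linspace(0.0, 4.0, 9)
y = np.sin(x)
grid = [0.1, 0.3, 1.0, 3.0, 10.0]          # candidate length-scales
scores = [log_marginal_likelihood(x, y, l) for l in grid]
best = grid[int(np.argmax(scores))]
```

    Grid Search is the simplest of the methods compared in the study; the others (conjugate gradient, random search, evolutionary algorithms, hyper-priors) differ in how they explore this same objective.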

  14. Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms.

    PubMed

    Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter

    2007-01-01

    A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created for application within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support in up to nine amount levels. As an efficient tool for high-throughput screening, well matched to the requirements of heterogeneous gas phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. Combined with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
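
    A toy sketch of the genetic-algorithm loop described above, with a synthetic fitness function standing in for the monolith-reactor conversion measurements (the encoding, operators and fitness are all illustrative assumptions, not the authors' procedure):

```python
import math
import random

random.seed(7)
N_METALS, N_LEVELS, POP, GENS = 49, 9, 30, 40

def random_catalyst():
    # chromosome: up to three (metal, amount-level) pairs
    return [(random.randrange(N_METALS), random.randrange(1, N_LEVELS + 1))
            for _ in range(3)]

def fitness(cat):
    # synthetic stand-in for measured CO/HC conversion (illustrative only)
    return sum(math.sin(m * 0.37) * a for m, a in cat)

def crossover(a, b):
    cut = random.randrange(1, 3)
    return a[:cut] + b[cut:]

def mutate(cat):
    cat = list(cat)
    i = random.randrange(3)
    cat[i] = (random.randrange(N_METALS), random.randrange(1, N_LEVELS + 1))
    return cat

pop = [random_catalyst() for _ in range(POP)]
best0 = max(fitness(c) for c in pop)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                    # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children
best = max(fitness(c) for c in pop)
```

    In the real procedure the fitness evaluation is the expensive step: each generation is prepared and measured in the multi-channel reactor rather than computed.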

  15. A new effective operator for the hybrid algorithm for solving global optimisation problems

    NASA Astrophysics Data System (ADS)

    Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac

    2018-04-01

    Hybrid algorithms have recently been used to solve complex single-objective optimisation problems, the ultimate goal being to find an optimised global solution. Building on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this operator, the proposed algorithm explores areas of the solution space that the operators of previous algorithms do not reach, and so finds better solutions. Moreover, two balancing parameters are proposed: one between local and global search, and one among the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions used in previous works show that the proposed framework finds better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
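
    The abstract does not specify the Mean-Search Operator in detail; one plausible, purely hypothetical reading is sketched below, in which a candidate is pulled toward the mean of the current elite solutions, which can relocate it into regions that no single elite occupies:

```python
def mean_search(candidate, elites, step=0.5):
    """Move a candidate toward the mean of the elite solutions.
    A hypothetical reading of the Mean-Search Operator, for illustration only."""
    dim = len(candidate)
    mean = [sum(e[i] for e in elites) / len(elites) for i in range(dim)]
    return [c + step * (m - c) for c, m in zip(candidate, mean)]

def sphere(x):
    # standard benchmark objective: minimum at the origin
    return sum(v * v for v in x)

# elites placed symmetrically about the optimum at the origin
elites = [[1.0, -2.0], [-1.0, 2.0], [0.5, 0.5], [-0.5, -0.5]]
cand = [3.0, 3.0]
new = mean_search(cand, elites)
```

    On this symmetric example the elite mean coincides with the optimum, so the move improves the sphere fitness; the paper's three constrained variants of the operator would differ in when and how strongly such a move is applied.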

  16. Orbital optimisation in the perfect pairing hierarchy: applications to full-valence calculations on linear polyacenes

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2018-03-01

    We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both in the π subspace alone and in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary even when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.

  17. Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping

    NASA Astrophysics Data System (ADS)

    Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan

    2016-12-01

    This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which differs from filters with a direct input/output coupling structure. Higher-order modes in the SIW cavities are used to generate finite transmission zeros for improved stopband performance. The design of SIW filters requires full-wave electromagnetic simulation and extensive optimisation; if a full-wave solver alone is used, the design process is very time consuming. The space mapping (SM) approach is called upon to alleviate this problem: the coarse model, an equivalent-circuit representation of the structure, is optimised for fast computation, while the design is verified with an accurate fine-model full-wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz is fabricated on a single-layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband, which extends from 2 to 11 GHz and from 13.5 to 17.3 GHz, demonstrating wide stopband performance.
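
    A one-dimensional caricature of the input space mapping idea, with cheap quadratics standing in for the fine (full-wave) and coarse (circuit) models; the extraction grid and sample points are arbitrary assumptions:

```python
def fine(x):    # stand-in for the expensive full-wave model
    return (x - 2.1) ** 2

def coarse(x):  # stand-in for the fast circuit model, optimum known at x = 2.0
    return (x - 2.0) ** 2

def extract_shift(samples):
    """Input-space parameter extraction: find s with coarse(x + s) ~ fine(x)."""
    best_s, best_err = 0.0, float("inf")
    for i in range(1001):
        s = -0.5 + 0.001 * i
        err = sum((fine(x) - coarse(x + s)) ** 2 for x in samples)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

s = extract_shift([1.0, 2.0, 3.0])
x_star = 2.0 - s   # map the cheaply found coarse optimum back to the fine space
```

    Because the two toy models differ by a pure shift, one extraction step already lands on the fine-model optimum; in the filter design the same loop runs over geometric parameters, with each fine-model evaluation costing a full EM simulation.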

  18. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values within a multi-objective modelling approach. Since technicians are expected to carry out both routine and stochastic maintenance work, these workloads are treated as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management is considered. These workloads are combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that the model generates information that practising maintenance engineers can apply in making more informed decisions on technicians' management.

  19. Honeybee economics: optimisation of foraging in a variable world.

    PubMed

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, while the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in an environment that varies in microclimate and in food quality and availability. Here we report, through simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns: they maximise net intake rate by investing both their own heat production and solar heat to raise body temperature to a level that guarantees a high suction velocity. They switch to an 'economising' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.

  20. Optimisation of logistics processes of energy grass collection

    NASA Astrophysics Data System (ADS)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions based solely on experience and intuition, the collection processes should be optimised and analysed using mathematical models and methods. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. Although the optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, the possibility of multiple collection points and multi-level collection has not been considered. The possible uses of energy grass are very wide (energetic use, biogas and bio-alcohol production, the paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model makes the following assumptions: (1) cooperative relations exist among processing and production facilities; (2) capacity constraints are taken into account; (3) the cost function of transportation is non-linear; (4) driver-related conditions are ignored.
The objective function of the optimisation is the maximisation of the profit, that is, of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) each transportation demand is assigned to exactly one transportation resource, and each resource to exactly one demand. The decision variables of the optimisation problem are the scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total cost of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc"
with the support of the European Union and co-funding from the European Social Fund. References: [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
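
The ant colony routing subroutine named above can be sketched, in heavily simplified form, for a toy five-node collection round trip (the distances, parameters and pheromone update rule are illustrative assumptions, not the paper's):

```python
import random

random.seed(3)
# symmetric distance matrix for five collection points (toy data)
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
N, ANTS, ITERS, EVAP = 5, 10, 30, 0.5
tau = [[1.0] * N for _ in range(N)]   # pheromone levels

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

best_tour, best_len = None, float("inf")
for _ in range(ITERS):
    tours = []
    for _ in range(ANTS):
        tour, free = [0], set(range(1, N))
        while free:
            cur = tour[-1]
            cands = list(free)
            # selection probability ~ pheromone * inverse distance
            weights = [tau[cur][j] / D[cur][j] for j in cands]
            nxt = random.choices(cands, weights)[0]
            tour.append(nxt)
            free.remove(nxt)
        tours.append(tour)
    for t in tours:
        if tour_length(t) < best_len:
            best_tour, best_len = t, tour_length(t)
    # evaporation plus deposit proportional to tour quality
    tau = [[v * EVAP for v in row] for row in tau]
    for t in tours:
        L = tour_length(t)
        for i in range(N):
            a, b = t[i], t[(i + 1) % N]
            tau[a][b] += 1.0 / L
            tau[b][a] += 1.0 / L
```

In the paper's method a routine of this kind would supply route costs to the genetic algorithm, which handles the assignment and scheduling decisions.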

  1. Lithospheric architecture of NE China from joint Inversions of receiver functions and surface wave dispersion through Bayesian optimisation

    NASA Astrophysics Data System (ADS)

    Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian

    2017-04-01

    The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism and low-velocity anomalies. To estimate lithospheric structures in greater detail, we perform the joint inversion of independent data sets, namely receiver functions and surface wave dispersion curves (group and phase velocity), based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well-constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and the bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from the joint inversions confirms the high Vp/Vs ratio (≈1.98) obtained using the H-Kappa method beneath some stations. Moreover, we could confirm the existence of a lower crustal velocity beneath several stations (e.g. station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity, and we compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.

  2. Status of radiation protection in various interventional cardiology procedures in the Asia Pacific region

    PubMed Central

    Tsapaki, Virginia; Faruque Ghulam, Mohammed; Lim, Soo Teik; Ngo Minh, Hung; Nwe, Nwe; Sharma, Anil; Sim, Kui-Hian; Srimahachota, Suphot; Rehani, Madan Mohan

    2011-01-01

    Objective Increasing use of interventional procedures in cardiology, with unknown levels of radiation protection in many countries of the Asia-Pacific region, necessitates a status assessment. The study was part of an International Atomic Energy Agency (IAEA) project for achieving improved radiation protection in interventional cardiology (IC) in developing countries. Design The survey covers 18 cardiac catheterisation laboratories in seven countries (Bangladesh, India, Malaysia, Myanmar, Singapore, Thailand and Vietnam). An important step was the creation of the 'Asian network of Cardiologists in Radiation Protection' and a newsletter. Data were collected on radiation protection tools, the number of IC laboratories, and the annual number of various paediatric and adult IC procedures in the hospital and in the country. Patient radiation dose data were collected in terms of Kerma Area Product (KAP) and cumulative dose (CD). Results It is encouraging that protection devices for staff are largely used in routine practice. Only 39% of the angiographic machines were equipped with a KAP meter. Operators' awareness of radiation-protection optimisation, initially lacking, improved significantly after participation in IAEA radiation-protection training. Only two of the five countries reporting patient percutaneous coronary intervention radiation-dose data were fully within the international guidance levels. Data from 51 patients who underwent multiple therapeutic procedures (median 2–3) indicated a total KAP reaching 995 Gy.cm2 (range 10.1–995) and CD 15.1 Gy (range 0.4–15.1), stressing the importance of dose monitoring and optimisation. Conclusions There is a need for interventional cardiology societies to play an active role in training actions and the implementation of radiation protection. PMID:27325974

  3. Differential patterns, trends and hotspots of road traffic injuries on different road networks in Vellore district, southern India.

    PubMed

    Mohan, Venkata Raghava; Sarkar, Rajiv; Abraham, Vinod Joseph; Balraj, Vinohar; Naumova, Elena N

    2015-03-01

    To describe spatial and temporal profiles of Road Traffic Injuries (RTIs) on different road networks in Vellore district of southern India, daily time series of RTI counts were created using the information in the police-maintained First Information Reports (FIRs), and temporal characteristics were analysed with respect to vehicle type, road type and time of day for the period January 2005 to May 2007. Daily incidence and trend of RTIs were estimated using Poisson regression analysis. Of the 3262 reported RTIs, 52% occurred on the National Highway (NH). The overall RTI rate on the NH was 8.8/100 000 vehicles per day, with significantly higher pedestrian involvement. The mean number of RTIs was significantly higher on weekends. Thirteen per cent of all RTIs involved fatalities. Hotspots were major town junctions, and RTI rates differed over different stretches of the NH. In India, FIRs form a valuable source of RTI information. Information on vehicle profiles, RTI patterns, and their spatial and temporal trends can be used by administrators to devise effective strategies for RTI prevention by concentrating on high-risk areas, thereby optimising the use of available personnel and resources. © 2014 John Wiley & Sons Ltd.

  4. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    NASA Astrophysics Data System (ADS)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem motivated by storage virtualisation in the Internet. While storage networks use dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises of how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that asks how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
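
    FlowScaling itself is not reproduced here; the generic feasibility-plus-scaling idea it builds on can be illustrated with a monotone predicate and bisection (the bandwidth model below is a deliberate simplification in which server capacities simply add up):

```python
def feasible(t, bandwidths, size):
    # within time t, server i can absorb at most bandwidths[i] * t bytes
    return sum(b * t for b in bandwidths) >= size

def min_transfer_time(bandwidths, size, tol=1e-9):
    """Bisect on the transfer-time bound using the monotone feasibility test."""
    lo, hi = 0.0, size / min(bandwidths)   # hi is always sufficient
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid, bandwidths, size):
            hi = mid
        else:
            lo = mid
    return hi

t = min_transfer_time([5.0, 3.0, 2.0], 100.0)   # analytic optimum: 10.0
```

    The paper replaces this naive additive predicate with a maximum-flow feasibility test that also captures the asymmetric upload/download constraints, and replaces the bisection with a scaling argument to reach the O(m log m) bound.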

  5. Multivalent Nanoparticle Networks Enable Point of Care Detection of Human Phospholipase-A2 in Serum

    PubMed Central

    Burnapp, Mark; Bentham, Andrew; Hillier, David; Zabron, Abigail; Khan, Shahid; Tyreman, Matthew; Stevens, Molly M.

    2017-01-01

    A rapid and highly sensitive point of care (PoC) lateral flow assay for phospholipase-A2 (PLA2) is demonstrated in serum through the enzyme-triggered release of a new class of biotinylated multi-armed polymers from a liposome substrate. Signal from the enzyme activity is generated by the adhesion of polystreptavidin-coated gold nanoparticle networks to the lateral flow device, which leads to the appearance of a red test line due to the localised surface plasmon resonance (LSPR) effect of the gold. The use of a liposome as the enzyme substrate and of multivalent linkers between the nanoparticles amplifies the signal: cleavage of a small amount of lipid releases a large amount of polymer linker, which in turn binds an even larger amount of gold nanoparticles. By optimising the molecular weight and multivalency of these biotinylated polymer linkers, the sensitivity of the device can be tuned to enable naked-eye detection of 1 nM human PLA2 in serum within 10 minutes. This high sensitivity enabled, for the first time, the correct diagnosis of pancreatitis from PLA2 activity in a PoC device, distinguishing diseased clinical samples from a set of healthy controls. PMID:25756526

  6. Optimising the use of observational electronic health record data: Current issues, evolving opportunities, strategies and scope for collaboration.

    PubMed

    Liaw, Siaw-Teng; Powell-Davies, Gawaine; Pearce, Christopher; Britt, Helena; McGlynn, Lisa; Harris, Mark F

    2016-03-01

    With increasing computerisation in general practice, national primary care networks are mooted as sources of data for health services and population health research and planning. Existing data collection programs - MedicinesInsight, Improvement Foundation, Bettering the Evaluation and Care of Health (BEACH) - vary in purpose, governance, methodologies and tools. General practitioners (GPs) have significant roles as collectors, managers and users of electronic health record (EHR) data. They need to understand the challenges to their clinical and managerial roles and responsibilities. The aim of this article is to examine the primary and secondary use of EHR data, identify challenges, discuss solutions and explore directions. Representatives from existing programs, Medicare Locals, Local Health Districts and research networks held workshops on the scope, challenges and approaches to the quality and use of EHR data. Challenges included data quality, interoperability, fragmented governance, proprietary software, transparency, sustainability, competing ethical and privacy perspectives, and cognitive load on patients and clinicians. Proposed solutions included effective change management; transparent governance and management of intellectual property, data quality, security, ethical access, and privacy; common data models, metadata and tools; and patient/community engagement. Collaboration and common approaches to tools, platforms and governance are needed. Processes and structures must be transparent and acceptable to GPs.

  7. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system; a previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video input (composite, IP, IEEE 1394) and which can compress (MPEG4), store and display them. The platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The architecture is optimised to play back, display and process video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network, and relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential of such a flexible system in third-generation video-surveillance systems, illustrated with a real case study of indoor surveillance.
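
    As a minimal sketch of the motion-detection stage such a platform might contain (not the system's actual implementation), simple frame differencing flags pixels whose intensity changes between consecutive frames:

```python
import numpy as np

def detect_motion(prev, curr, thresh=25):
    """Flag motion where the absolute inter-frame difference exceeds a threshold."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# synthetic 8-bit frames: a bright block moves two pixels between frames
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[10:20, 10:20] = 200
curr[12:22, 12:22] = 200

mask = detect_motion(prev, curr)
```

    The mask covers only the non-overlapping parts of the two block positions; a real pipeline would follow this with segmentation and tracking of the flagged regions, as the abstract describes.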

  8. Acoustic Resonator Optimisation for Airborne Particle Manipulation

    NASA Astrophysics Data System (ADS)

    Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian

    Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel-plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator; obtaining an optimised design requires careful consideration of the effects of layer thickness and material properties. Furthermore, the frequency-dependent effect of acoustic attenuation is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties, with sizes down to 14.8 μm.

  9. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design method for digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). The design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) are applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-established swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The quality of the proposed method is evaluated using several important filter attributes, and the comparative study demonstrates its effectiveness for FIR filter design.
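
    A hedged sketch of the general approach: particle swarm optimisation of FIR filter taps against a mean-square frequency-response error (the tap count, grid, cutoff and PSO constants are illustrative assumptions; the paper's polyphase formulation and FDCs are not modelled):

```python
import numpy as np

rng = np.random.default_rng(0)
N_TAPS, N_PART, ITERS = 9, 20, 60
w_grid = np.linspace(0, np.pi, 64)
ideal = (w_grid <= 0.4 * np.pi).astype(float)   # ideal lowpass response

def cost(h):
    # mean square error between actual magnitude response and the ideal one
    n = np.arange(N_TAPS)
    H = h @ np.exp(-1j * np.outer(n, w_grid))
    return float(np.mean((np.abs(H) - ideal) ** 2))

pos = rng.uniform(-0.5, 0.5, (N_PART, N_TAPS))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
g = pbest[np.argmin(pbest_cost)].copy()
g_cost0 = g_cost = pbest_cost.min()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # inertia plus cognitive (pbest) and social (gbest) attraction terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    for i, p in enumerate(pos):
        c = cost(p)
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = p.copy(), c
            if c < g_cost:
                g, g_cost = p.copy(), c
```

    The global best error is non-increasing by construction; the cuckoo search variants compared in the paper differ mainly in how new candidate tap vectors are generated.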

  10. Optimisation of process parameters on thin shell part using response surface methodology (RSM) and genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    This study simulates the optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. The process parameters considered are melt temperature, mould temperature, packing pressure and cooling time, and their effect on the warpage of the part is analysed. The part selected for study is made of polypropylene (PP). The combinations of process parameters are analysed using analysis of variance (ANOVA), and the optimised values are obtained using response surface methodology (RSM). Both RSM and a genetic algorithm (GA) are applied in the Design Expert software to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.
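
    As a rough illustration of the GA step, the sketch below minimises a hypothetical fitted second-order (RSM-style) model of warpage over coded process parameters. The model coefficients, population size and genetic operators are assumptions for demonstration only, not values from the study.

```python
import numpy as np

# Hypothetical quadratic response-surface model of warpage as a function of
# coded factors x = (melt temp, mould temp, packing pressure, cooling time)
def warpage(x):
    x = np.asarray(x)
    return 0.5 + np.sum(0.3 * (x - 0.2) ** 2) + 0.05 * x[0] * x[1]

def ga_minimise(f, dim=4, pop=40, gens=100, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (pop, dim))            # coded factors in [-1, 1]
    for _ in range(gens):
        fit = np.array([f(ind) for ind in x])
        order = np.argsort(fit)
        parents = x[order[: pop // 2]]            # truncation selection
        # Uniform crossover between randomly paired parents
        i, j = rng.integers(0, len(parents), (2, pop))
        mask = rng.random((pop, dim)) < 0.5
        children = np.where(mask, parents[i], parents[j])
        children += rng.normal(0, 0.05, children.shape)  # Gaussian mutation
        x = np.clip(children, -1, 1)              # keep within the coded region
    fit = np.array([f(ind) for ind in x])
    return x[np.argmin(fit)], fit.min()
```

    The true minimum of this toy surface lies near x ≈ 0.2 in every factor, so the GA's best value should approach 0.5.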

  11. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple-access interference, struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO), which integrates particle swarm optimisation (PSO) and ant colony optimisation (ACO). Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves performance comparable to the maximum-likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.

  12. Global Topology Optimisation

    DTIC Science & Technology

    2016-10-31

    statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and ... an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of ... of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally

  13. Sentient Structures: Optimising Sensor Layouts for Direct Measurement of Discrete Variables

    DTIC Science & Technology

    2008-11-01

    Sentient Structures: Optimising Sensor Layouts for Direct Measurement of Discrete Variables. Report to the US Air Force (contract number FA48690714045; author: Donald Price). ... optimal sensor placements is an important requirement for the development of sentient structures. An optimal sensor layout is attained when a limited

  14. An Optimisation Procedure for the Conceptual Analysis of Different Aerodynamic Configurations

    DTIC Science & Technology

    2000-06-01

    G. Lombardi, G. Mengali, Department of Aerospace Engineering, University of Pisa, Via Diotisalvi 2, 56126 Pisa, Italy; F. Beux, Scuola Normale Superiore ... obtain engines, gears and various systems; their weights and centre of gravity positions ... configurations with improved performances with respect to a ... design parameters have been arranged for cruise: payload, velocity, range, cruise height, engine ... The optimisation process includes the following steps:

  15. A simulation and optimisation procedure to model daily suppression resource transfers during a fire season in Colorado

    Treesearch

    Yu Wei; Erin J. Belval; Matthew P. Thompson; Dave E. Calkin; Crystal S. Stonesifer

    2016-01-01

    Sharing fire engines and crews between fire suppression dispatch zones may help improve the utilisation of fire suppression resources. Using the Resource Ordering and Status System, the Predictive Services’ Fire Potential Outlooks and the Rocky Mountain Region Preparedness Levels from 2010 to 2013, we tested a simulation and optimisation procedure to transfer crews and...

  16. Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context

    NASA Astrophysics Data System (ADS)

    Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian

    2016-05-01

    The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. 
The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.

  17. Disease activity-guided dose optimisation of adalimumab and etanercept is a cost-effective strategy compared with non-tapering tight control rheumatoid arthritis care: analyses of the DRESS study.

    PubMed

    Kievit, Wietske; van Herwaarden, Noortje; van den Hoogen, Frank Hj; van Vollenhoven, Ronald F; Bijlsma, Johannes Wj; van den Bemt, Bart Jf; van der Maas, Aatke; den Broeder, Alfons A

    2016-11-01

    A disease activity-guided dose optimisation strategy for adalimumab or etanercept (TNFi, tumour necrosis factor inhibitors) has been shown to be non-inferior in maintaining disease control in patients with rheumatoid arthritis (RA) compared with usual care. However, the cost-effectiveness of this strategy was previously unknown. This is a preplanned cost-effectiveness analysis of the Dose REduction Strategy of Subcutaneous TNF inhibitors (DRESS) study, a randomised controlled, open-label, non-inferiority trial performed in two Dutch rheumatology outpatient clinics. Patients with low disease activity using TNF inhibitors were included. Total healthcare costs were measured, and quality-adjusted life years (QALYs) were based on EQ5D utility scores. Decremental cost-effectiveness analyses were performed using bootstrap analyses; the incremental net monetary benefit (iNMB) was used to express cost-effectiveness. 180 patients were included; 121 were allocated to the dose optimisation strategy and 59 to control. The dose optimisation strategy resulted in a mean cost difference of -€12 280 (95% interval -€10 502 to -€14 104) per patient per 18 months, i.e. a saving. There is an 84% chance that the dose optimisation strategy results in a QALY loss, with a mean loss of -0.02 (-0.07 to 0.02). The decremental cost-effectiveness ratio (DCER) was €390 493 (€5 085 184; dominant) of savings per QALY lost. The mean iNMB was €10 467 (€6553-€14 037). Sensitivity analyses using 30% and 50% lower prices for TNFi showed the strategy remained cost-effective. Disease activity-guided dose optimisation of TNFi results in considerable cost savings, while no relevant loss of quality of life was observed. When the minimal QALY loss is compensated at the upper limit of what society is willing to pay or accept in the Netherlands, the net savings are still high. NTR3216; Post-results. Published by the BMJ Publishing Group Limited.

  18. Multidetector CT radiation dose optimisation in adults: short- and long-term effects of a clinical audit.

    PubMed

    Tack, Denis; Jahnen, Andreas; Kohler, Sarah; Harpes, Nico; De Maertelaer, Viviane; Back, Carlo; Gevenois, Pierre Alain

    2014-01-01

    To report short- and long-term effects of an audit process intended to optimise the radiation dose from multidetector row computed tomography (MDCT). A survey of radiation dose from all eight MDCT departments in the state of Luxembourg performed in 2007 served as baseline, and involved the most frequently imaged regions (head, sinus, cervical spine, thorax, abdomen, and lumbar spine). CT dose index volume (CTDIvol), dose-length product per acquisition (DLP/acq), and DLP per examination (DLP/exa) were recorded, and their mean, median, 25th and 75th percentiles compared. In 2008, an audit conducted in each department helped to optimise doses. In 2009 and 2010, two further surveys evaluated the audit's impact on the dose delivered. Between 2007 and 2009, DLP/exa significantly decreased by 32-69 % for all regions (P < 0.001) except the lumbar spine (5 %, P = 0.455). Between 2009 and 2010, DLP/exa significantly decreased by 13-18 % for sinus, cervical and lumbar spine (P ranging from 0.016 to less than 0.001). Between 2007 and 2010, DLP/exa significantly decreased for all regions (18-75 %, P < 0.001). Collective dose decreased by 30 % and the 75th percentile (diagnostic reference level, DRL) by 20-78 %. The audit process resulted in long-lasting dose reduction, with DRLs reduced by 20-78 %, mean DLP/examination by 18-75 %, and collective dose by 30 %. • External support through clinical audit may optimise default parameters of routine CT. • Reduction of 75th percentiles used as reference diagnostic levels is 18-75 %. • The effect of this audit is sustainable over time. • Dose savings through optimisation can be added to those achievable through CT.

  19. Medical imaging dose optimisation from ground up: expert opinion of an international summit.

    PubMed

    Samei, Ehsan; Järvinen, Hannu; Kortesniemi, Mika; Simantirakis, George; Goh, Charles; Wallace, Anthony; Vano, Eliseo; Bejan, Adrian; Rehani, Madan; Vassileva, Jenia

    2018-05-17

    As in any medical intervention, there is either a known or an anticipated benefit to the patient from undergoing a medical imaging procedure. This benefit is generally significant, as demonstrated by the manner in which medical imaging has transformed clinical medicine. At the same time, when it comes to imaging that deploys ionising radiation, there is a potential associated risk from radiation. Radiation risk has been recognised as a key liability in the practice of medical imaging, creating a motivation for radiation dose optimisation. The level of radiation dose and risk in imaging varies but is generally low. Thus, from the epidemiological perspective, this makes the estimation of the precise level of associated risk highly uncertain. However, in spite of the low magnitude and high uncertainty of this risk, its possibility cannot easily be refuted. Therefore, given the moral obligation of healthcare providers, 'first, do no harm,' there is an ethical obligation to mitigate this risk. Precisely how to achieve this goal scientifically and practically within a coherent system has been an open question. To address this need, in 2016, the International Atomic Energy Agency (IAEA) organised a summit to clarify the role of Diagnostic Reference Levels to optimise imaging dose, summarised into an initial report (Järvinen et al 2017 Journal of Medical Imaging 4 031214). Through a consensus building exercise, the summit further concluded that the imaging optimisation goal goes beyond dose alone, and should include image quality as a means to include both the benefit and the safety of the exam. The present, second report details the deliberation of the summit on imaging optimisation.

  20. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
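
    The multistage dynamic-programming idea behind the contour optimisation can be sketched generically: each contour point chooses among a small set of candidate positions, trading external (image) energy against a smoothness penalty between neighbouring points. This open-contour toy version illustrates discrete snake optimisation in general, not the authors' "time-delayed" algorithm.

```python
import numpy as np

def dp_contour(ext_energy, smooth_weight=1.0):
    """Optimise one candidate index per contour point by multistage DP.

    ext_energy[i, k]: external (image) energy of candidate k at contour point i.
    Smoothness penalises differing candidate indices at neighbouring points.
    Returns the minimising candidate index for each point (open contour).
    """
    n, k = ext_energy.shape
    cost = ext_energy[0].copy()             # stage-0 accumulated costs
    back = np.zeros((n, k), dtype=int)      # backpointers for backtracking
    idx = np.arange(k)
    for i in range(1, n):
        # trans[a, b]: cost of reaching candidate b at point i from a at i-1
        trans = cost[:, None] + smooth_weight * (idx[:, None] - idx[None, :]) ** 2
        back[i] = np.argmin(trans, axis=0)
        cost = trans[back[i], idx] + ext_energy[i]
    # Backtrack the globally optimal path through the stages
    path = np.empty(n, dtype=int)
    path[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        path[i - 1] = back[i, path[i]]
    return path
```

    With a closed building contour, the first point's choice must additionally be fixed or iterated over, which is where variants such as the "time-delayed" formulation come in.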

  1. Predicting emergency coronary artery bypass graft following PCI: application of a computational model to refer patients to hospitals with and without onsite surgical backup

    PubMed Central

    Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S

    2015-01-01

    Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
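
    The cohort-level metric optimised in this study is the AUROC. A minimal sketch of computing it directly from predicted scores, via the Mann-Whitney pair-counting formulation, is given below; the scores and labels in the test are synthetic.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count positive/negative pairs where the positive scores higher;
    # tied pairs contribute one half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

    Optimising a model against this cohort-level quantity, rather than per-example loss, is the key distinction the authors draw against traditional SVM training.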

  2. Optimisation of asymmetric flow field-flow fractionation for the characterisation of nanoparticles in coated polydisperse TiO2 with applications in food and feed.

    PubMed

    Omar, J; Boix, A; Kerckhove, G; von Holst, C

    2016-12-01

    Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion, and the time was fixed at mid-values (5 min). In the next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min−1; DF, 0.4 ml min−1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated.
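
    The central composite designs used here (for ultrasonication and for the AF4 factors) follow a standard construction: 2^k factorial corners, 2k axial "star" points and centre runs, all in coded units. A generic sketch of generating such a design matrix (not the authors' exact run order or axial distance) is:

```python
from itertools import product
import numpy as np

def central_composite(k, alpha=None, centre_points=1):
    """Coded design matrix for a central composite design:
    2^k factorial corners, 2k axial points at +/-alpha, plus centre runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable-CCD axial distance
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    centre = np.zeros((centre_points, k))
    return np.vstack([corners, axial, centre])
```

    For the two-factor ultrasonication case (amplitude, time), this yields 4 corner runs, 4 axial runs and the centre run; each coded row is then mapped back to physical amplitude and time ranges before running the experiment.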

  3. Optimisation of asymmetric flow field-flow fractionation for the characterisation of nanoparticles in coated polydisperse TiO2 with applications in food and feed

    PubMed Central

    Omar, J.; Boix, A.; Kerckhove, G.; von Holst, C.

    2016-01-01

    Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion and time was fixed in mid-values (5 min). In a next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min–1; DF, 0.4 ml min–1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated. PMID:27650879

  4. Biodegradation of free cyanide and subsequent utilisation of biodegradation by-products by Bacillus consortia: optimisation using response surface methodology.

    PubMed

    Mekuto, Lukhanyo; Ntwampe, Seteno Karabo Obed; Jackson, Vanessa Angela

    2015-07-01

    A mesophilic alkali-tolerant bacterial consortium belonging to the Bacillus genus was evaluated for its ability to biodegrade high free cyanide (CN(-)) concentration (up to 500 mg CN(-)/L), subsequent to the oxidation of the formed ammonium and nitrates in a continuous bioreactor system solely supplemented with whey waste. Furthermore, an optimisation study for successful cyanide biodegradation by this consortium was evaluated in batch bioreactors (BBs) using response surface methodology (RSM). The input variables, that is, pH, temperature and whey-waste concentration, were optimised using a numerical optimisation technique where the optimum conditions were found to be as follows: pH 9.88, temperature 33.60 °C and whey-waste concentration of 14.27 g/L, under which 206.53 mg CN(-)/L in 96 h can be biodegraded by the microbial species from an initial cyanide concentration of 500 mg CN(-)/L. Furthermore, using the optimised data, cyanide biodegradation in a continuous mode was evaluated in a dual-stage packed-bed bioreactor (PBB) connected in series to a pneumatic bioreactor system (PBS) used for simultaneous nitrification, including aerobic denitrification. The whey-supported Bacillus sp. culture was not inhibited by the free cyanide concentration of up to 500 mg CN(-)/L, with an overall degradation efficiency of ≥ 99 % with subsequent nitrification and aerobic denitrification of the formed ammonium and nitrates over a period of 80 days. This is the first study to report free cyanide biodegradation at concentrations of up to 500 mg CN(-)/L in a continuous system using whey waste as a microbial feedstock. The results showed that the process has the potential for the bioremediation of cyanide-containing wastewaters.

  5. The optimization of treatment and management of schizophrenia in Europe (OPTiMiSE) trial: rationale for its methodology and a review of the effectiveness of switching antipsychotics.

    PubMed

    Leucht, Stefan; Winter-van Rossum, Inge; Heres, Stephan; Arango, Celso; Fleischhacker, W Wolfgang; Glenthøj, Birte; Leboyer, Marion; Leweke, F Markus; Lewis, Shôn; McGuire, Phillip; Meyer-Lindenberg, Andreas; Rujescu, Dan; Kapur, Shitij; Kahn, René S; Sommer, Iris E

    2015-05-01

    Most of the 13 542 trials contained in the Cochrane Schizophrenia Group's register just tested the general efficacy of pharmacological or psychosocial interventions. Studies on the subsequent treatment steps, which are essential to guide clinicians, are largely missing. This knowledge gap leaves important questions unanswered. For example, when a first antipsychotic failed, is switching to another drug effective? And when should we use clozapine? The aim of this article is to review the efficacy of switching antipsychotics in case of nonresponse. We also present the European Commission sponsored "Optimization of Treatment and Management of Schizophrenia in Europe" (OPTiMiSE) trial which aims to provide a treatment algorithm for patients with a first episode of schizophrenia. We searched Pubmed (October 29, 2014) for randomized controlled trials (RCTs) that examined switching the drug in nonresponders to another antipsychotic. We described important methodological choices of the OPTiMiSE trial. We found 10 RCTs on switching antipsychotic drugs. No trial was conclusive and none was concerned with first-episode schizophrenia. In OPTiMiSE, 500 first episode patients are treated with amisulpride for 4 weeks, followed by a 6-week double-blind RCT comparing continuation of amisulpride with switching to olanzapine and ultimately a 12-week clozapine treatment in nonremitters. A subsequent 1-year RCT validates psychosocial interventions to enhance adherence. Current literature fails to provide basic guidance for the pharmacological treatment of schizophrenia. The OPTiMiSE trial is expected to provide a basis for clinical guidelines to treat patients with a first episode of schizophrenia. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  6. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable the detection of changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
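
    The core step, optimising a single projection vector with an evolutionary strategy so that projected DSFs separate healthy and damaged states, can be sketched as below. This uses a simple (1+λ) strategy and a Fisher-like separation score on synthetic features; the paper's advanced evolutionary strategy, DSF definition and detection statistics are not reproduced.

```python
import numpy as np

def optimise_projection(healthy, damaged, iters=300, lam=10, seed=0):
    """(1+lambda) evolution strategy for one projection vector that maximises
    a Fisher-like separation of projected damage-sensitive features."""
    rng = np.random.default_rng(seed)
    dim = healthy.shape[1]

    def separation(v):
        v = v / np.linalg.norm(v)
        ph, pd = healthy @ v, damaged @ v
        return abs(ph.mean() - pd.mean()) / np.sqrt(ph.var() + pd.var() + 1e-12)

    best = rng.normal(size=dim)
    best_fit = separation(best)
    sigma = 0.5                              # mutation step size
    for _ in range(iters):
        offspring = best + sigma * rng.normal(size=(lam, dim))
        fits = np.array([separation(o) for o in offspring])
        if fits.max() > best_fit:            # elitist replacement
            best, best_fit = offspring[np.argmax(fits)], fits.max()
        sigma *= 0.999                       # slow step-size decay
    return best / np.linalg.norm(best), best_fit
```

    On features where damage shifts one component, the optimised vector concentrates its weight there, which is the behaviour the projection-pursuit approach exploits.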

  7. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
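
    The Monte Carlo estimation of a TRE percentile under landmark localisation error can be sketched with a rigid (Kabsch) point registration: perturb the landmarks, register, and record the error propagated to a target point. The landmark geometry, error magnitude and trial count below are illustrative assumptions, not the study's values.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # reflection-corrected rotation
    return R, cd - R @ cs

def tre_percentile(landmarks, target, sigma=1.5, trials=2000, q=90, seed=0):
    """Monte Carlo estimate of the q-th percentile target registration error
    when each landmark is localised with isotropic Gaussian error (std sigma)."""
    rng = np.random.default_rng(seed)
    tres = np.empty(trials)
    for i in range(trials):
        noisy = landmarks + rng.normal(0, sigma, landmarks.shape)
        R, t = kabsch(noisy, landmarks)        # register noisy to true landmarks
        tres[i] = np.linalg.norm(R @ target + t - target)
    return np.percentile(tres, q)
```

    Ranking candidate landmark sets (or imaging planes) by this 90th-percentile TRE, rather than the mean, is what makes the initialisation robust to unlucky localisation errors.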

  8. Engineering of a complex bone tissue model with endothelialised channels and capillary-like networks.

    PubMed

    Klotz, B J; Lim, K S; Chang, Y X; Soliman, B G; Pennings, I; Melchels, F P W; Woodfield, T B F; Rosenberg, A J; Malda, J; Gawlitta, D

    2018-05-30

    In engineering of tissue analogues, upscaling to clinically-relevant sized constructs remains a significant challenge. The successful integration of a vascular network throughout the engineered tissue is anticipated to overcome the lack of nutrient and oxygen supply to residing cells. This work aimed at developing a multiscale bone-tissue-specific vascularisation strategy. Engineering pre-vascularised bone leads to biological and fabrication dilemmas. To fabricate channels endowed with an endothelium and suitable for osteogenesis, rather stiff materials are preferable, while capillarisation requires soft matrices. To overcome this challenge, gelatine-methacryloyl hydrogels were tailored by changing the degree of functionalisation to allow for cell spreading within the hydrogel, while still enabling endothelialisation on the hydrogel surface. An additional challenge was the combination of the multiple required cell-types within one biomaterial, sharing the same culture medium. Consequently, a new medium composition was investigated that simultaneously allowed for endothelialisation, capillarisation and osteogenesis. Integrated multipotent mesenchymal stromal cells, which give rise to pericyte-like and osteogenic cells, and endothelial-colony-forming cells (ECFCs) which form capillaries and endothelium, were used. Based on the aforementioned optimisation, a construct of 8 × 8 × 3 mm, with a central channel of 600 µm in diameter, was engineered. In this construct, ECFCs covered the channel with endothelium and osteogenic cells resided in the hydrogel, adjacent to self-assembled capillary-like networks. This study showed the promise of engineering complex tissue constructs by means of human primary cells, paving the way for scaling-up and finally overcoming the challenge of engineering vascularised tissues.

  9. RF-Plasma Source Commissioning in Indian Negative Ion Facility

    NASA Astrophysics Data System (ADS)

    Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Gahlaut, A.; Soni, J.; Kumar, Sunil; Pandya, K.; Parmar, K. G.; Sonara, J.; Yadava, Ratnakar; Chakraborty, A. K.; Kraus, W.; Heinemann, B.; Riedl, R.; Obermayer, S.; Martens, C.; Franzen, P.; Fantz, U.

    2011-09-01

    The Indian program of the RF based negative ion source has started with the commissioning of ROBIN, the inductively coupled RF based negative ion source facility under establishment at the Institute for Plasma Research (IPR), India. The facility is being developed under a technology transfer agreement with IPP Garching. It consists of a single RF driver based beam source (a BATMAN replica) coupled, through a matching network, to a 100 kW, 1 MHz RF generator with a self-excited oscillator, for plasma production and ion extraction and acceleration. The delivery of the RF generator and the RF plasma source without the accelerator has enabled initiation of plasma production experiments. The recent experimental campaign has established the matching circuit parameters that result in plasma production with density in the range of 0.5–1 × 10¹⁸ m⁻³, at operational gas pressures ranging between 0.4 and 1 Pa. Various configurations of the matching network have been tried in order to obtain stable operation of the set-up for RF powers ranging between 25 and 85 kW and pulse lengths ranging between 4 and 20 s. It has been observed that the range of matching circuit parameters over which the frequency of the power supply is stable is narrow, and further experiments with an increased number of turns in the coil are planned to see whether the range can be widened. In this paper, the description of the experimental system and the commissioning data related to the optimisation of the various parameters of the matching network, to obtain stable plasma of the required density, are presented and discussed.

  10. Incorporation of perception-based information in robot learning using fuzzy reinforcement learning agents

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo

    2002-04-01

    Robot learning in unstructured environments has proven to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilise perceptions to guide their learning towards those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. For this purpose, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of GAs (genetic algorithms), a GA-based FRL (GAFRL) agent is presented to solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified by using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find application in ocean exploration, detection or sea rescue activity, as well as military maritime activity.
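
    The local-minima escape described above rests on a standard GA loop: evaluate a population of candidate network weights, keep the fittest, and breed the rest. The abstract gives no implementation details, so the following is a minimal, self-contained sketch under assumed settings; the `fitness` surface merely stands in for the critic's evaluation, and the population size, crossover and mutation choices are all hypothetical.

```python
import math
import random

random.seed(0)  # reproducible run

def fitness(w):
    # Stand-in for the critic network's evaluation of a candidate weight
    # vector: a multimodal surface with many local minima (hypothetical).
    return sum(x * x - 3.0 * math.cos(2.0 * math.pi * x) + 3.0 for x in w)

def ga_search(dim=4, pop_size=30, generations=60):
    # Random initial population of candidate weight vectors.
    pop = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Uniform crossover followed by small Gaussian mutation.
            children.append([random.choice(g) + random.gauss(0.0, 0.1)
                             for g in zip(a, b)])
        pop = survivors + children
    return min(pop, key=fitness)

best = ga_search()  # best weight vector found by the global search
```

    In the GAFRL setting, a vector like `best` would seed the actor-critic networks before local learning proceeds, which is how a GA sidesteps poor local minima that purely gradient-style updates can get trapped in.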

  11. Easy-Going On-Spectrometer Optimisation of Phase Modulated Homonuclear Decoupling Sequences in Solid-State NMR

    NASA Astrophysics Data System (ADS)

    Grimminck, Dennis L. A. G.; Vasa, Suresh K.; Meerts, W. Leo; Kentgens, P. M.

    2011-06-01

    A global optimisation scheme for phase modulated proton homonuclear decoupling sequences in solid-state NMR is presented. Phase modulations, parameterised by DUMBO Fourier coefficients, were optimised using a Covariance Matrix Adaptation Evolution Strategies algorithm. Our method, denoted EASY-GOING homonuclear decoupling, starts with featureless spectra and optimises proton-proton decoupling during either proton or carbon signal detection. On the one hand, our solutions closely resemble (e)DUMBO for moderate sample spinning frequencies and medium radio-frequency (rf) field strengths. On the other hand, the EASY-GOING approach resulted in a superior solution, achieving significantly better resolved proton spectra at a very high 680 kHz rf field strength. References: N. Hansen and A. Ostermeier, Evol. Comput. 9 (2001) 159-195; B. Elena, G. de Paepe, L. Emsley, Chem. Phys. Lett. 398 (2004) 532-538.

  12. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
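
    The abstract does not spell out the power-law update itself, but the general idea of a power law-based local search can be sketched as follows: draw step lengths from a heavy-tailed (Pareto) distribution, so that most moves are fine local refinements while occasional long jumps help escape local optima. Everything below (objective, step scale, tail exponent) is an illustrative assumption, not the paper's PLSMO rule.

```python
import random

random.seed(1)  # reproducible run

def sphere(x):
    # Classic benchmark objective; global minimum 0 at the origin.
    return sum(v * v for v in x)

def power_law_local_search(x0, f, iters=200, scale=0.05, alpha=1.5):
    # Greedy local search with heavy-tailed (Pareto) step lengths:
    # mostly small refinements, occasionally a long jump out of a
    # poor neighbourhood. Only improving moves are accepted.
    best, best_f = list(x0), f(x0)
    for _ in range(iters):
        step = scale * random.paretovariate(alpha)  # power-law length
        cand = [v + random.uniform(-1.0, 1.0) * step for v in best]
        if f(cand) < best_f:
            best, best_f = cand, f(cand)
    return best, best_f

sol, val = power_law_local_search([0.8, -0.6, 0.3], sphere)
```

    Because only improving candidates are accepted, the routine can be bolted onto any population-based NIA as a refinement step for the current best solution.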

  13. Process optimisation of microwave-assisted extraction of peony ( Paeonia suffruticosa Andr .) seed oil using hexane-ethanol mixture and its characterisation

    Treesearch

    Xiaoli Sun; Wengang Li; Jian Li; Yuangang Zu; Chung-Yun Hse; Jiulong Xie; Xiuhua Zhao

    2016-01-01

    A microwave-assisted extraction (MAE) method using an ethanol and hexane mixture was employed to extract peony (Paeonia suffruticosa Andr.) seed oil (PSO). The aim of the study was to optimise the extraction for both yield and energy consumption in mixed-solvent MAE. The highest oil yield (34.49%) and lowest unit energy consumption (14,125.4 J g⁻¹)...

  14. Optimizing Operational Physical Fitness (Optimisation de L’Aptitude Physique Operationnelle)

    DTIC Science & Technology

    2009-01-01

    NORTH ATLANTIC TREATY ORGANISATION, RESEARCH AND TECHNOLOGY ORGANISATION, AC/323(HFM-080)TP/200, www.rto.nato.int. RTO TECHNICAL REPORT TR-HFM-080: Optimizing Operational Physical Fitness (Optimisation de l'aptitude physique opérationnelle). Final Report of Task Group 019.

  15. Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations

    NASA Astrophysics Data System (ADS)

    Linders, Viktor; Kupiainen, Marco; Nordström, Jan

    2017-07-01

    We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.

  16. The robust model predictive control based on mixed H2/H∞ approach with separated performance formulations and its ISpS analysis

    NASA Astrophysics Data System (ADS)

    Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong

    2017-12-01

    In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important for real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive character of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. Then, the proposed RMPC is designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and input-to-state practical stability (ISpS) of the closed loop is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.

  17. Cost optimisation and minimisation of the environmental impact through life cycle analysis of the waste water treatment plant of Bree (Belgium).

    PubMed

    De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B

    2011-01-01

    An ASM2da model of the full-scale waste water treatment plant of Bree (Belgium) has been made. It showed very good correlation with reference operational data. This basic model has been extended to include an accurate calculation of the environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios have been studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate set point for post-denitrification. Both a cost-based and an environmental-impact or Life Cycle Assessment (LCA) based approach to optimisation are able to significantly lower the cost and environmental footprint. However, the LCA approach has some advantages over cost minimisation for an existing full-scale plant: it tends to choose control settings that are more logical, resulting in safer operation of the plant with fewer risks regarding the consents, and it results in a better effluent at a slightly increased cost.

  18. Optimising design, operation and energy consumption of biological aerated filters (BAF) for nitrogen removal of municipal wastewater.

    PubMed

    Rother, E; Cornel, P

    2004-01-01

    The biofiltration process in wastewater treatment combines filtration and biological processes in one reactor. In Europe it has meanwhile become an accepted technology in advanced wastewater treatment whenever space is scarce and a virtually suspended-solids-free effluent is demanded. Although more than 500 plants are in operation world-wide, there is still a lack of published operational experience to help planners and operators identify potentials for optimisation, e.g. energy consumption or vulnerability to peak loads. Examples from pilot trials are given of how nitrification and denitrification can be optimised. Nitrification can be quickly increased by adjusting the DO content of the water. Furthermore, carrier materials like zeolites can store surplus ammonia during peak loads and release it afterwards. Pre-denitrification in biofilters is normally limited by the amount of easily degradable organic substrate, resulting in relatively high requirements for external carbon. The combination of pre-DN, N and post-DN filters is much more advisable for most municipal wastewaters, because the recycle rate can be reduced and external carbon can be saved. As an example, it is shown for a full-scale preanoxic-DN/N/postanoxic-DN plant of 130,000 p.e. how 15% of the energy could be saved by optimising internal recycling and some control strategies.

  19. Optimised in vitro applicable loads for the simulation of lateral bending in the lumbar spine.

    PubMed

    Dreischarf, Marcel; Rohlmann, Antonius; Bergmann, Georg; Zander, Thomas

    2012-07-01

    In in vitro studies of the lumbar spine simplified loading modes (compressive follower force, pure moment) are usually employed to simulate the standard load cases flexion-extension, axial rotation and lateral bending of the upper body. However, the magnitudes of these loads vary widely in the literature. Thus the results of current studies may lead to unrealistic values and are hardly comparable. It is still unknown which load magnitudes lead to a realistic simulation of maximum lateral bending. A validated finite element model of the lumbar spine was used in an optimisation study to determine which magnitudes of the compressive follower force and bending moment deliver results that fit best with averaged in vivo data. The best agreement with averaged in vivo measured data was found for a compressive follower force of 700 N and a lateral bending moment of 7.8 Nm. These results show that loading modes that differ strongly from the optimised one may not realistically simulate maximum lateral bending. The simplified but in vitro applicable loading cannot perfectly mimic the in vivo situation. However, the optimised magnitudes are those which agree best with averaged in vivo measured data. Its consequent application would lead to a better comparability of different investigations. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Electroconvulsive therapy stimulus titration: Not all it seems.

    PubMed

    Rosenman, Stephen J

    2018-05-01

    To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.

  1. Application of statistical experimental design for optimisation of bioinsecticides production by sporeless Bacillus thuringiensis strain on cheap medium.

    PubMed

    Ben Khedher, Saoussen; Jaoua, Samir; Zouari, Nabil

    2013-01-01

    In order to overproduce bioinsecticides with a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using response surface methodology. In a first step, a Plackett-Burman design used to evaluate the effects of eight medium components on delta-endotoxin production showed that starch, soya bean and sodium chloride exhibited significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The obtained results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L(-1) starch, 30 g L(-1) soya bean and 9 g L(-1) sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was improved markedly by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) when compared to the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great importance from a practical point of view.
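
    The Plackett-Burman screening step mentioned above estimates each factor's main effect from a small two-level orthogonal design. As a hedged illustration (the yields below are invented; only the design-matrix construction and the effect formula are standard), the 12-run design screens up to 11 factors, and the effect of factor j is the mean response at its high level minus the mean at its low level:

```python
# 12-run Plackett-Burman design: 11 cyclic shifts of the classic
# generating row plus a final all-minus run (levels coded -1 / +1).
gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
design = [gen[-i:] + gen[:-i] for i in range(11)] + [[-1] * 11]

def response(run):
    # Invented delta-endotoxin yields: only the first three columns
    # (say starch, soya bean, NaCl) truly matter, mirroring the
    # abstract's screening outcome.
    return 100 + 12 * run[0] + 9 * run[1] + 5 * run[2]

ys = [response(r) for r in design]

# Main effect of factor j = mean(y at +1) - mean(y at -1)
#                         = sum(x_j * y) / (N / 2) for a balanced design.
effects = [sum(r[j] * y for r, y in zip(design, ys)) / 6 for j in range(8)]
# The three active factors stand out; the five inert ones come out ~0.
```

    With real data, the factors whose effects stand out from the noise (here columns 0-2) would be carried forward to the central composite design stage for response surface fitting.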

  2. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis

    PubMed Central

    Waterfall, Christy M.; Cobb, Benjamin D.

    2001-01-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702

  3. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis.

    PubMed

    Waterfall, C M; Cobb, B D

    2001-12-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a 'matrix-based' optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable.

  4. Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.

    PubMed

    Kramar, A; Turk, S; Vrecer, F

    2003-04-30

    The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.

  5. Application of statistical experimental design for optimisation of bioinsecticides production by sporeless Bacillus thuringiensis strain on cheap medium

    PubMed Central

    Ben Khedher, Saoussen; Jaoua, Samir; Zouari, Nabil

    2013-01-01

    In order to overproduce bioinsecticides with a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using response surface methodology. In a first step, a Plackett-Burman design used to evaluate the effects of eight medium components on delta-endotoxin production showed that starch, soya bean and sodium chloride exhibited significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The obtained results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L−1 starch, 30 g L−1 soya bean and 9 g L−1 sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was improved markedly by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) when compared to the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great importance from a practical point of view. PMID:24516462

  6. Optimising resolution for a preparative separation of Chinese herbal medicine using a surrogate model sample system.

    PubMed

    Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian

    2009-06-26

    This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of the Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system, hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and a speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h) the best flow rate was 40 ml/min.

  7. The impact of the Major Trauma Network: will trauma units continue to treat complex foot and ankle injuries?

    PubMed

    Hay-David, A G C; Clint, S A; Brown, R R

    2014-12-01

    April 1st 2012 saw the introduction of National Trauma Networks in England, with the aim of optimising the management of major trauma. Patients with an ISS≥16 would be transferred to the regional Major Trauma Centre (level 1). Our premise was that trauma units (level 2) would no longer manage complex foot and ankle injuries, thereby obviating the need for a foot and ankle specialist service. We performed a retrospective analysis of the epidemiology of foot and ankle injuries, using the Gloucestershire trauma database, from a trauma unit with a population of 750,000. Rates of open fractures, complex foot and ankle injuries and the requirement for stabilisation with external fixation were reviewed before and after the introduction of the regional Trauma Network. Secondly, using the Trauma Audit & Research Network (TARN) database, all foot and ankle injuries triaged to the regional Major Trauma Centre (MTC) were reviewed. The incidence of open foot and ankle injuries was 2.9 per 100,000 per year. There were 5.1% open injuries before the network and 3.2% after (p>0.05). The frequency of complex foot and ankle injuries was 4.2% before and 7.5% after the network commenced, showing no significant change. There was no statistically significant change in the number of patients with complex foot and ankle injuries treated by application of external fixators. Analysis of TARN data revealed that only 18% of patients with foot and ankle injuries taken to the MTC had an ISS≥16. The majority of these patients were identified as requiring plastic surgical intervention for open fractures (69%) or were polytrauma patients (43%). Only 4.5% of patients had isolated, closed foot and ankle injuries. We found that at the trauma unit there was no decrease in the numbers of complex foot and ankle injuries, open fractures, or applications of external fixators following the introduction of the Trauma Network. These patients will continue to attend trauma units as they usually have an ISS<16. Our findings suggest that there is still a need for foot and ankle specialists at trauma units, in order to manage patients with complex foot and ankle injuries. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energies of the wavelet coefficients in different frequency channels are used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
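
    The feature-extraction step above, wavelet packet decomposition followed by per-channel energies, can be sketched with a simple Haar filter pair. The paper's actual wavelet, decomposition depth and normalisation are not stated in the abstract, so those choices here are assumptions:

```python
import math

def haar_step(x):
    # One Haar analysis step: pairwise sums (low-pass) and
    # differences (high-pass), orthonormally scaled.
    lo = [(x[i] + x[i + 1]) / math.sqrt(2.0) for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / math.sqrt(2.0) for i in range(0, len(x), 2)]
    return lo, hi

def wavelet_packet_energies(signal, levels=2):
    # Full wavelet-packet tree: both the low- and high-pass branches
    # are split at every level, giving 2**levels sub-bands.
    bands = [signal]
    for _ in range(levels):
        bands = [half for band in bands for half in haar_step(band)]
    total = sum(v * v for band in bands for v in band) or 1.0
    return [sum(v * v for v in band) / total for band in bands]

# A signal that is flat in its first half and oscillating in its second:
feats = wavelet_packet_energies([1.0, 1.0, 1.0, 1.0, 1.0, -1.0, 1.0, -1.0])
```

    These normalised sub-band energies would then form the feature vector fed to the SVM classifiers; in the paper, the bees algorithm additionally selects which of the channels to keep.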

  9. Roadmap to the multidisciplinary design analysis and optimisation of wind energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Moreno, S. Sanchez; Zaaijer, M. B.; Bottasso, C. L.

    Here, a research agenda is described to further encourage the application of Multidisciplinary Design Analysis and Optimisation (MDAO) methodologies to wind energy systems. As a group of researchers closely collaborating within the International Energy Agency (IEA) Wind Task 37 on Wind Energy Systems Engineering: Integrated Research, Design and Development, we have identified challenges that will be encountered by users building an MDAO framework. This roadmap comprises 17 research questions and activities recognised to belong to three research directions: model fidelity, system scope and workflow architecture. It is foreseen that sensible answers to all these questions will make it easier to apply MDAO in the wind energy domain. Beyond the agenda, this work also promotes the use of systems engineering to design, analyse and optimise wind turbines and wind farms, to complement existing compartmentalised research and design paradigms.

  10. Pre-operative optimisation of lung function

    PubMed Central

    Azhar, Naheed

    2015-01-01

    The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta-2 agonists, inhaled corticosteroids and systemic corticosteroids are the main drugs used for this, and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function. PMID:26556913

  11. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.

  12. Optimisation of surfactant decontamination and pre-treatment of waste chicken feathers by using response surface methodology.

    PubMed

    Tesfaye, Tamrat; Sithole, Bruce; Ramjugernath, Deresh; Ndlela, Luyanda

    2018-02-01

Commercially processed, untreated chicken feathers are biologically hazardous due to the presence of blood-borne pathogens. Prior to valorisation, it is crucial that they are decontaminated to remove the microbial contamination. The present study focuses on evaluating the best technologies to decontaminate and pre-treat chicken feathers in order to make them suitable for valorisation. Waste chicken feathers were washed with three surfactants (sodium dodecyl sulphate, dimethyl dioctadecyl ammonium chloride, and polyoxyethylene (40) stearate) using statistically designed experiments. Process conditions were optimised using response surface methodology with a Box-Behnken experimental design, and the data were compared with decontamination using an autoclave. Under optimised conditions, the microbial counts of the decontaminated and pre-treated chicken feathers were significantly reduced, making them safe for handling and use in valorisation applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
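
    For readers unfamiliar with the design named above, a Box-Behnken design in coded units is easy to enumerate: every pair of factors takes the four (±1, ±1) corners while the remaining factors sit at their mid-level, plus centre-point replicates. The sketch below is generic (the function name and the three-centre-point default are assumptions, not details from the paper):

    ```python
    from itertools import combinations, product

    def box_behnken(n_factors: int, n_center: int = 3):
        """Box-Behnken design in coded units (-1, 0, +1): for each pair
        of factors, run the four corner combinations with all other
        factors at 0, then append centre-point replicates."""
        runs = []
        for i, j in combinations(range(n_factors), 2):
            for lo_hi in product((-1, 1), repeat=2):
                run = [0] * n_factors
                run[i], run[j] = lo_hi
                runs.append(run)
        runs.extend([[0] * n_factors for _ in range(n_center)])
        return runs
    ```

    For three factors this gives the classic 15-run design: 12 edge midpoints plus 3 centre replicates, to which a quadratic response surface is then fitted.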

  13. Biomass supply chain optimisation for Organosolv-based biorefineries.

    PubMed

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

This work aims to provide a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real-world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, and temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
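
    The siting decision at the core of such a model can be illustrated with a toy facility-location sketch. All names and example data below are illustrative, and it brute-forces what a MILP solver would handle exactly at scale: open a fixed number of candidate sites and ship each supply zone's biomass to its cheapest open site.

    ```python
    from itertools import combinations

    def best_sites(fixed_cost, transport_cost, supply, n_open=1):
        """Toy biorefinery siting: choose n_open candidate sites to
        minimise fixed costs plus transport costs, where
        transport_cost[s][z] is the unit cost of moving biomass from
        supply zone z to site s, and each zone ships its whole supply
        to the cheapest open site."""
        best_total, best_subset = None, None
        for subset in combinations(range(len(fixed_cost)), n_open):
            total = sum(fixed_cost[s] for s in subset)
            total += sum(q * min(transport_cost[s][z] for s in subset)
                         for z, q in enumerate(supply))
            if best_total is None or total < best_total:
                best_total, best_subset = total, subset
        return best_total, best_subset
    ```

    With two candidate sites, two zones and made-up costs, `best_sites([100, 80], [[2, 5], [4, 1]], [10, 10])` opens the cheaper site 1 at a total cost of 130; a real MILP adds capacities, multiple periods and feedstock qualities on top of this skeleton.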

  14. Roadmap to the multidisciplinary design analysis and optimisation of wind energy systems

    DOE PAGES

    Perez-Moreno, S. Sanchez; Zaaijer, M. B.; Bottasso, C. L.; ...

    2016-10-03

Here, a research agenda is described to further encourage the application of Multidisciplinary Design Analysis and Optimisation (MDAO) methodologies to wind energy systems. As a group of researchers closely collaborating within the International Energy Agency (IEA) Wind Task 37 for Wind Energy Systems Engineering: Integrated Research, Design and Development, we have identified challenges that will be encountered by users building an MDAO framework. This roadmap comprises 17 research questions and activities grouped into three research directions: model fidelity, system scope and workflow architecture. It is foreseen that sensible answers to all these questions will enable easier application of MDAO in the wind energy domain. Beyond the agenda, this work also promotes the use of systems engineering to design, analyse and optimise wind turbines and wind farms, to complement existing compartmentalised research and design paradigms.

  15. rPM6 parameters for phosphorus- and sulphur-containing open-shell molecules

    NASA Astrophysics Data System (ADS)

    Saito, Toru; Takano, Yu

    2018-03-01

In this article, we introduce a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing these two elements. The two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated on 14 radical species, each containing either a phosphorus or a sulphur atom, and compared with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on the remaining 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.

  16. On the optimisation of the use of 3He in radiation portal monitors

    NASA Astrophysics Data System (ADS)

    Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet

    2013-02-01

Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has made it difficult for manufacturers to provide enough detectors to satisfy the market demand. In this paper we analyse the design of typical commercial RPMs and optimise the detector parameters either to maximise the efficiency using the same amount of 3He or to minimise the amount of gas needed to reach the same detection performance, by reducing the gas volume or pressure in an optimised design.

  17. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
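
    Of the initial-population schemes named in the abstract, Latin hypercube sampling is the simplest to sketch. The minimal pure-Python version below is an illustrative implementation (not the authors' code): each dimension is cut into equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension.

    ```python
    import random

    def latin_hypercube(n_samples: int, n_dims: int, rng=None):
        """Latin hypercube sample in the unit cube: per dimension, draw
        one uniform point from each of n_samples equal strata, then
        shuffle the strata so the dimensions pair up randomly."""
        rng = rng or random.Random()
        cols = []
        for _ in range(n_dims):
            strata = [(k + rng.random()) / n_samples for k in range(n_samples)]
            rng.shuffle(strata)
            cols.append(strata)
        return [list(point) for point in zip(*cols)]
    ```

    Unlike plain Monte Carlo sampling, every one-dimensional projection of the result hits each stratum exactly once, which is why such designs spread an initial population more evenly over the search space.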

  18. Lap time simulation and design optimisation of a brushed DC electric motorcycle for the Isle of Man TT Zero Challenge

    NASA Astrophysics Data System (ADS)

    Dal Bianco, N.; Lot, R.; Matthys, K.

    2018-01-01

This work regards the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the full length of the Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.

  19. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different trade-offs between classification error and diversity. During the dynamic selection phase, a meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full-polarimetric high-resolution range profile dataset. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the choice of diversity measure is studied concurrently.
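
    The Pareto front that the optimisation phase searches for is defined by non-dominance. A minimal sketch (illustrative names, with two minimised objectives standing in for classification error and a diversity penalty) filters a set of candidate objective vectors down to the first non-dominated front:

    ```python
    def pareto_front(points):
        """Return the non-dominated subset of a list of objective
        vectors, assuming every objective is to be minimised (the
        first front in NSGA-II's non-dominated sorting)."""
        def dominates(a, b):
            # a dominates b: no worse in every objective, strictly better in one
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))
        return [p for p in points if not any(dominates(q, p) for q in points)]
    ```

    For example, `(0.5, 0.6)` is dropped from `[(0.2, 0.9), (0.4, 0.5), (0.6, 0.3), (0.5, 0.6), (0.3, 0.8)]` because `(0.4, 0.5)` beats it in both objectives; the other four points survive as mutually incomparable trade-offs.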

  20. Determination of volatile monophenols in beer using acetylation and headspace solid-phase microextraction in combination with gas chromatography and mass spectrometry.

    PubMed

    Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R

    2010-08-31

Monophenols are widely spread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer, but so far, little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO3 as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time, extraction temperature and salt addition. Afterwards, we calibrated and validated the method successfully and applied it to the analysis of monophenols in beer samples. 2010 Elsevier B.V. All rights reserved.
