Sample records for simulation-based optimisation

  1. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested as a means of optimising the geometry of a sealing piston ring. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder simply by being bent to fit it. A method for FEM analysis of an arbitrary piston ring geometry is implemented in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties that may lead to a lack of convergence of the optimisation are presented, and an example of an unsuccessful optimisation performed in APDL is discussed. A possible direction for further improvement of the optimisation is proposed.
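
    The record above names simulated annealing as the search strategy. Below is a minimal, generic sketch of that idea (not the paper's APDL implementation): a candidate geometry, represented as a plain parameter vector, is randomly perturbed and worse candidates are accepted with a temperature-controlled probability; objective() is a hypothetical penalty on deviation from the demanded pressure curve.

    ```python
    import math
    import random

    def simulated_annealing(objective, x0, step=0.05, t0=1.0, cooling=0.995, iters=5000):
        x, fx = x0[:], objective(x0)
        best, fbest = x[:], fx
        t = t0
        for _ in range(iters):
            cand = x[:]
            i = random.randrange(len(cand))          # perturb one geometry parameter
            cand[i] += random.uniform(-step, step)
            fc = objective(cand)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x[:], fx
            t *= cooling                             # geometric cooling schedule
        return best, fbest

    # Toy usage with a stand-in objective (sum of squares):
    best, f = simulated_annealing(lambda p: sum(v * v for v in p), [1.0, -0.5, 0.3])
    ```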

  2. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
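
    The Monte Carlo step described above (perturbing landmark positions with a localisation-error model and reading off a TRE percentile) can be sketched as follows. This is a simplified illustration under assumed isotropic Gaussian noise and a rigid Kabsch fit, not the authors' pipeline; all names and parameter values are placeholders.

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, mu_d - R @ mu_s

    def tre_percentile(landmarks, target, sigma=1.5, n_trials=1000, q=90):
        """q-th percentile TRE at `target` under Gaussian landmark localisation noise."""
        errs = []
        for _ in range(n_trials):
            noisy = landmarks + np.random.normal(0.0, sigma, landmarks.shape)
            R, t = rigid_fit(noisy, landmarks)       # register noisy -> true landmarks
            errs.append(np.linalg.norm(R @ target + t - target))
        return float(np.percentile(errs, q))

    # Toy usage: four landmarks and one target point (units arbitrary, e.g. mm).
    lm = np.array([[0.0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]])
    print(tre_percentile(lm, target=np.array([25.0, 25.0, 25.0])))
    ```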

  3. Optimal integrated management of groundwater resources and irrigated agriculture in arid coastal regions

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Heck, V.

    2014-09-01

    Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. To ensure sustainable management of such regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop that achieves the most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop that finds the optimal groundwater abstraction pattern. The behaviour of farms is described by crop-water production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied to the south Batinah region of Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Because of conflicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, environment, and socio-economic development.
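
    As a schematic illustration of the decomposition (an analytical inner loop nested inside an outer search over abstraction), the toy below maximises net benefit over a single abstraction variable; the concave production function and the quadratic salinity penalty are invented stand-ins for the crop-water production functions and the ANN aquifer response.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def inner_profit(water):
        # Stand-in crop-water production function: concave profit in water use.
        return 10.0 * np.sqrt(water) - 0.5 * water

    def outer_objective(abstraction):
        # Stand-in aquifer response: salinity penalty grows with abstraction.
        penalty = 0.02 * abstraction ** 2
        return -(inner_profit(abstraction) - penalty)   # negate: we minimise

    res = minimize_scalar(outer_objective, bounds=(0.0, 100.0), method="bounded")
    print(res.x, -res.fun)   # optimal abstraction and the net benefit it yields
    ```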

  4. Photonic simulation of entanglement growth and engineering after a spin chain quench.

    PubMed

    Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio

    2017-11-17

    The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most curious feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental aspects, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and opens up the use of photonic circuits for optimising quantum devices.

  5. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    A radar network can offer significant performance improvements for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may exceed a predefined threshold under full-power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. A novel LPI optimisation algorithm is then presented in which, for a predefined MI threshold, the Schleher intercept factor of the network is minimised by optimising the transmission power allocation among the radars, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
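
    The core optimisation (minimise an intercept-factor surrogate subject to an MI floor by reallocating transmit power) can be illustrated with a generic constrained solver in place of the paper's GA-NP. The channel gains, the MI surrogate and all thresholds below are invented placeholders.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    g = np.array([1.0, 0.8, 1.3, 0.6])   # assumed target-channel gains (placeholders)
    h = np.array([0.9, 1.1, 0.7, 1.0])   # assumed interceptor-channel gains (placeholders)
    MI_MIN, P_MAX = 2.0, 1.0             # illustrative MI floor and per-radar power cap

    def intercept(p):                    # surrogate intercept factor: grows with power
        return float(h @ p)

    def mutual_info(p):                  # illustrative MI surrogate, concave in power
        return float(np.sum(np.log1p(g * p)))

    res = minimize(intercept, x0=np.full(4, 0.5), method="SLSQP",
                   bounds=[(0.0, P_MAX)] * 4,
                   constraints=[{"type": "ineq", "fun": lambda p: mutual_info(p) - MI_MIN}])
    print(res.x, intercept(res.x), mutual_info(res.x))
    ```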

  6. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force, and a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787

  7. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    Unmanned aerial vehicle (UAV) path planning is an important part of UAV mission planning. Starting from the artificial potential field (APF) path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is then translated into an unconstrained one with the help of slack variables. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of the method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that obtained with the traditional APF method. In addition, the improved method solves the dead-point problem effectively.
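
    For reference, the underlying APF update that the paper builds on can be sketched as below, assuming point obstacles with a quadratic attractive and an inverse repulsive potential; the paper's additional control force and slack-variable reformulation are not reproduced here.

    ```python
    import numpy as np

    def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0, dt=0.1):
        force = k_att * (goal - pos)                 # attractive term pulls towards goal
        for obs in obstacles:
            d = np.linalg.norm(pos - obs)
            if 0.0 < d < d0:                         # repulsion acts only inside radius d0
                force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * (pos - obs)
        return pos + dt * force                      # gradient-descent style position update

    # Toy usage: step a point towards (10, 10) past one obstacle at (5, 5).
    p = np.array([0.0, 0.0])
    for _ in range(100):
        p = apf_step(p, goal=np.array([10.0, 10.0]), obstacles=[np.array([5.0, 5.0])])
    ```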

  8. Thermal energy and economic analysis of a PCM-enhanced household envelope considering different climate zones in Morocco

    NASA Astrophysics Data System (ADS)

    Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine

    2018-07-01

    This study investigates the thermal energy potential and economic feasibility of integrating phase change materials (PCMs) into an air-conditioned family household, considering different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and cost-effectiveness among predefined candidate solutions. The optimisation methodology is based on coupling EnergyPlus® as a dynamic simulation tool with GenOpt® as an optimisation tool. Considering the obtained optimum design strategies, a thermal energy and economic analysis is carried out to investigate the feasibility of integrating PCMs in Moroccan construction. The results show that the PCM-integrated household envelope reduces the cooling/heating thermal energy demand compared with a reference household without PCM. For the cost-effectiveness optimisation, however, it is deduced that economic feasibility is still insufficient under current PCM market conditions. The optimal design parameters are also analysed.

  9. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem in which especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
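
    The sampling step mentioned above can be illustrated with SciPy's quasi-Monte Carlo module: draw an optimised Latin hypercube on the unit cube and map it to normally distributed parameters by the inverse CDF. The dimensions, means and standard deviations below are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm, qmc

    sampler = qmc.LatinHypercube(d=3, optimization="random-cd")  # 3 uncertain parameters
    u = sampler.random(n=50)                  # stratified samples in the unit cube [0, 1)^3
    mean = np.array([1500.0, 0.35, 2.5])      # assumed parameter means (placeholders)
    std = np.array([75.0, 0.02, 0.1])         # assumed standard deviations (placeholders)
    samples = norm.ppf(u) * std + mean        # inverse-CDF map to normal distributions
    ```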

  10. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).

  11. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    NASA Astrophysics Data System (ADS)

    Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.

    2014-05-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone, easy-to-use Java-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.

  12. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

    In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.

  13. Multi-objective optimisation and decision-making of space station logistics strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
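
    As an aside, the DE stage of such a hybrid can be exercised with SciPy's built-in differential evolution on a stand-in scalarised objective; the physical-programming aggregation of the four objectives is problem-specific and only mocked here.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def scalarised(x):
        # Stand-in for the physical-programming aggregate of four objectives.
        f = np.array([x @ x, np.sum(np.abs(x - 1.0)), np.sum(np.sin(x) ** 2), x.sum() ** 2])
        return float(f.sum())

    result = differential_evolution(scalarised, bounds=[(-2.0, 2.0)] * 4,
                                    maxiter=200, seed=0)
    print(result.x, result.fun)
    ```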

  14. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbances and communication delays. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the problem under these conditions. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delays occur during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.

  15. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14-bus as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  16. The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.

    2012-04-01

    To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in water use efficiency in irrigated agriculture. To achieve robust and fast operation of the management system regarding water quality and water quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network to model the aquifer response, including the seawater interface, trained on a scenario database generated by a numerical density-dependent groundwater flow model. For simulating the behaviour of highly productive agricultural farms, crop-water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models adapted to the regional climate conditions, together with a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and resulting abstraction scenarios. Because of conflicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-criteria optimisation is performed.

  17. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

    This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation results, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system is a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator, and fuel costs are reduced by approximately 70-80% compared with the conventional diesel generator.
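
    A compact global-best PSO of the kind compared against HOMER here is sketched below; the inertia and acceleration coefficients are common textbook defaults rather than the study's settings, and the objective is a stand-in.

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        lo, hi = np.array(bounds, dtype=float).T          # bounds: list of (low, high)
        dim = len(bounds)
        x = np.random.uniform(lo, hi, (n_particles, dim)) # initial positions
        v = np.zeros_like(x)
        pbest = x.copy()                                  # personal bests
        pval = np.array([objective(p) for p in x])
        gbest = pbest[pval.argmin()].copy()               # global best
        for _ in range(iters):
            r1 = np.random.rand(n_particles, dim)
            r2 = np.random.rand(n_particles, dim)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)                    # keep particles in bounds
            val = np.array([objective(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()].copy()
        return gbest, float(pval.min())

    # Toy usage on a sphere function in 4 dimensions:
    g, fg = pso(lambda z: float(np.sum(z ** 2)), [(-5.0, 5.0)] * 4)
    ```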

  18. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the PSP surface-potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking behaviour. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is, however, simpler than that of the PSO algorithm.

  19. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in the compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults, and of the effect of the transformer wiring mode on these characteristics, the optimisation target of the reference voltage calculation is presented together with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which minimises the swell in the phase-to-ground voltage after compensation and improves the symmetry of the DVR output voltages, thereby effectively increasing the compensation capability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  20. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on a two-stage power pricing model, the power price is associated with the traffic data efficiently received by a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform algorithms with fixed parameters (sensing time and transmission time), and that the power cost is reduced effectively.

  21. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

    This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined condition is satisfied. Compared with time-driven distributed optimisation algorithms, the proposed algorithm therefore has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge to the solution of the problem exponentially fast, and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.

  22. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
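
    The ɛ-constraint idea is easy to show in miniature: minimise one objective while capping the other, then sweep the cap to trace a Pareto front. The two quadratic objectives below are toy stand-ins, and the bisection and pseudospectral machinery of the paper is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2     # e.g. a fuel-like cost
    f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2     # e.g. a time-like cost

    front = []
    for eps in np.linspace(0.1, 2.0, 10):            # sweep the cap on f2
        res = minimize(f1, x0=[0.5, 0.5], method="SLSQP",
                       constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}])
        front.append((f1(res.x), f2(res.x)))         # one Pareto point per cap value
    ```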

  23. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the swarm-intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to control the choice between the ACO and GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.

  24. A method to incorporate the effect of beam quality on image noise in a digitally reconstructed radiograph (DRR) based computer simulation for optimisation of digital radiography

    NASA Astrophysics Data System (ADS)

    Moore, Craig S.; Wood, Tim J.; Saunderson, John R.; Beavis, Andrew W.

    2017-09-01

    The use of computer-simulated digital x-radiographs for optimisation purposes has become widespread in recent years. To make these optimisation investigations effective, it is vital that simulated radiographs contain accurate anatomical and system noise. Computer algorithms that simulate radiographs based solely on the incident detector x-ray intensity (‘dose’) have been reported extensively in the literature. However, while it has been established for digital mammography that x-ray beam quality is an important factor when modelling noise in simulated images, there are no such studies for diagnostic imaging of the chest, abdomen and pelvis. This study investigates the influence of beam quality on image noise in a digital radiography (DR) imaging system and incorporates these effects into a digitally reconstructed radiograph (DRR) computer simulator. Image noise was measured on a real DR imaging system as a function of dose (absorbed energy) over a range of clinically relevant beam qualities. Simulated ‘absorbed energy’ and ‘beam quality’ DRRs were then created for each patient and tube voltage under investigation. Simulated noise images, corrected for dose and beam quality, were subsequently produced from the absorbed energy and beam quality DRRs using the measured noise, absorbed energy and beam quality relationships. The noise images were superimposed onto the noiseless absorbed energy DRRs to create the final images. Signal-to-noise measurements in simulated chest, abdomen and spine images were within 10% of the corresponding measurements in real images. This compares favourably to our previous algorithm, where images corrected for dose only were all within 20%.
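
    Schematically, the correction described above amounts to looking up a per-pixel noise standard deviation as a function of absorbed energy and beam quality, and superimposing the resulting noise on the noiseless DRR. The calibration function below is a stand-in for the measured relationships, assuming quantum-limited 1/sqrt(dose) scaling with a beam-quality factor.

    ```python
    import numpy as np

    def noise_sigma(absorbed_energy, beam_quality):
        # Stand-in calibration: quantum-limited 1/sqrt(dose) scaling, scaled by a
        # beam-quality factor. The real method uses measured relationships.
        return (1.0 + 0.2 * beam_quality) / np.sqrt(np.maximum(absorbed_energy, 1e-6))

    def noisy_drr(drr_energy, drr_quality, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        sigma = noise_sigma(drr_energy, drr_quality)
        return drr_energy + rng.normal(0.0, 1.0, drr_energy.shape) * sigma
    ```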

  25. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as of the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, few snapshots, closely spaced sources, and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.

  26. Use of a genetic algorithm to improve the rail profile on Stockholm underground

    NASA Astrophysics Data System (ADS)

    Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon

    2010-12-01

    In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.

  27. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

    Due to the strongly non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two design cases (with and without a centre element) for two three-ring CCAAs (having 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern for each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.

  28. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

    The Bacteria Foraging Optimisation algorithm is a collective-behaviour-based meta-heuristic search method that depends on the social influence of the bacterial co-agents in the search space of the problem. The algorithm faces considerable hindrances in its application to discrete and graph-based problems because its mathematical modelling and dynamic structure are biased towards continuous domains. This has been the key motivation for introducing a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) algorithm, for discrete problems, which outnumber the continuous-domain problems representable by mathematical and numerical equations in real life. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospects of applying the method to other similar optimisation and graph-based problems. The various solution representations that can be handled by this DBFO are also discussed. The implications and dynamics of the various parameters used in the DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes depending on previous experience and analysis of the paths covered. This makes the algorithm better at generating combinations for graph-based problems and for NP-hard problems.

  29. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and the maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The conventional secondary and primary suspension components of the bogie are chosen as the design parameters in the first two steps, respectively. In the last step, semi-active suspension is in focus: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on bogie dynamics are explored. The safety Pareto-optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduce the number of design parameters and improve the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  30. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an unresolved problem. This paper therefore develops a coupled structural-scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAAs.

  31. Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping

    NASA Astrophysics Data System (ADS)

    Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan

    2016-12-01

    This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which differs from filters with a direct input/output coupling structure. Higher modes in the SIW cavities are used to generate the finite transmission zeros for improved stopband performance. The design of SIW filters requires full-wave electromagnetic simulation and extensive optimisation; if a full-wave solver is used for optimisation, the design process is very time consuming. The space mapping (SM) approach has been called upon to alleviate this problem. In this case, the coarse model is optimised using an equivalent-circuit-model-based representation of the structure for fast computations, while the verification of the design is completed with an accurate fine-model full-wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz is fabricated on a single-layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband. The stopband extends from 2 to 11 GHz and from 13.5 to 17.3 GHz, demonstrating wide stopband performance.

  32. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to the PSO-optimised controller or any non-optimised backstepping controller.

  33. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, the geometry and the process parameters must be matched to each other, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model based approach on the component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with the relevant geometry parameters. If the relevant parameter space is not part of the underlying database, additional samples are drawn via finite-element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure, and the predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
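
    A minimal Gaussian-process surrogate of the kind described can be built with scikit-learn: geometry parameters in, a formability indicator out, trained on pre-sampled draping results (mocked below with an analytic function).

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    X = np.random.uniform(0.0, 1.0, (40, 3))                # sampled geometry parameters
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]  # mock draping-simulation output

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(np.array([[0.4, 0.6, 0.2]]), return_std=True)
    ```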

  34. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure, using particle swarm optimisation with curve fitting, is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out, and the results are compared with measurements from experimental prototypes built according to the design specifications produced by the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.

  35. Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST

    NASA Astrophysics Data System (ADS)

    Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2018-04-01

    We describe the algorithm for employing multi-GPU power, on the basis of Message Passing Interface (MPI) domain decomposition, in GALAMOST, a molecular dynamics code designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code is developed from our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size on the same hardware. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on workstation simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles. Good scaling over many cluster nodes is presented for two-patch particles.

  36. CMOS analogue amplifier circuits optimisation using hybrid backtracking search algorithm with differential evolution

    NASA Astrophysics Data System (ADS)

    Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2016-07-01

    This paper proposes a novel hybrid optimisation algorithm which combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, namely Differential Evolution (DE). The proposed algorithm, called BSA-DE, is employed for the optimal design of two commonly used analogue circuits, namely a Complementary Metal Oxide Semiconductor (CMOS) differential amplifier circuit with current mirror load and a CMOS two-stage operational amplifier (op-amp) circuit. BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach with the capability of solving global optimisation problems. In this paper, the transistor sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve the performance of the circuits. The simulation results justify the superiority of BSA-DE in global convergence properties and fine-tuning ability, and prove it to be a promising candidate for the optimal design of analogue CMOS amplifier circuits. The simulation results obtained for both amplifier circuits demonstrate the effectiveness of the proposed BSA-DE-based approach over DE, harmony search (HS), artificial bee colony (ABC) and particle swarm optimisation (PSO) in terms of convergence speed, design specifications and design parameters. It is shown that the BSA-DE-based design technique for each amplifier circuit yields the smallest MOS transistor area, and each designed circuit is shown to have the best performance parameters, such as gain and power dissipation, compared with those recently reported in the literature.

  37. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured by the extent of warpage of the moulded parts, while productivity is measured by the duration of the moulding cycle. To control quality, many researchers have introduced various optimisation approaches, which have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. This paper therefore presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examines the warpage of the moulded parts before and after optimisation for both cooling channel types. A front panel housing has been selected as the specimen, and the performance of the proposed optimisation approach has been analysed for conventional straight-drilled cooling channels and compared with Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, where warpage improved by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, where warpage improved by 38.7% after optimisation. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity of the moulded part produced.

  38. Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.

    PubMed

    Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin

    2014-02-01

    Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments on proteins and other macromolecules consume significant resources, in terms of both spectrometer time and the effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including the auxiliaries (waveforms, decoupling sequences, etc.), for the analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selecting the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community.

  19. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting the performance requirements of different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
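
    As a rough illustration of the per-tier sizing that such a queueing model implies (this is not the paper's hybrid heuristic), the sketch below treats each tier as an M/M/m queue and picks the smallest VM count whose mean response time meets an SLA bound; the arrival rates, service rates and SLA figures are invented:

      import math

      def erlang_c(m, a):
          # Probability of waiting in an M/M/m queue with offered load a = lam/mu,
          # via the Erlang B recursion followed by the B-to-C conversion.
          b = 1.0
          for k in range(1, m + 1):
              b = a*b / (k + a*b)
          return m*b / (m - a*(1 - b))

      def mean_response(m, lam, mu):
          a = lam/mu
          return erlang_c(m, a)/(m*mu - lam) + 1.0/mu

      def vms_needed(lam, mu, sla):
          m = max(1, math.ceil(lam/mu))
          while lam/mu >= m or mean_response(m, lam, mu) > sla:
              m += 1               # add VMs until stable and within the SLA
          return m

      # Illustrative three-tier workload: requests/s, service rate per VM, SLA (s)
      tiers = [("web", 400.0, 120.0, 0.05), ("app", 250.0, 60.0, 0.10),
               ("db", 120.0, 40.0, 0.20)]
      for name, lam, mu, sla in tiers:
          m = vms_needed(lam, mu, sla)
          print(f"{name}: {m} VMs, mean response {1000*mean_response(m, lam, mu):.1f} ms")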

  20. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.

  1. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

    This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve for the optimised tracking control policy; robust H∞ theory is relied upon to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proof are subsequently given to establish the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.

  2. Mixing formula for tissue-mimicking silicone phantoms in the near infrared

    NASA Astrophysics Data System (ADS)

    Böcklin, C.; Baumann, D.; Stuker, F.; Fröhlich, Jürg

    2015-03-01

    Knowledge of accurate optical parameters of materials is paramount in biomedical optics applications and in numerical simulations of such systems. Phantom materials with variable but predefined parameters are needed to optimise these systems. An optimised integrating sphere measurement setup and reconstruction algorithm are presented in this work to determine the optical properties of silicone-rubber-based phantoms whose absorption and scattering properties are altered with TiO2 and carbon black particles. A mixing formula for all constituents is derived and allows phantoms with predefined optical properties to be created.

  3. Simulations of multi-contrast x-ray imaging using near-field speckles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre

    2016-01-28

    X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features of similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present simulation software used to model image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help in better understanding and optimising the technique itself.

  4. Optimising the Parallelisation of OpenFOAM Simulations

    DTIC Science & Technology

    2014-06-01

    Keough, Shannon; Maritime Division, Defence Science and Technology Organisation (report DSTO-TR-2987)

    The OpenFOAM computational fluid dynamics toolbox allows parallel computation of... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI...

  5. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, usually employed to suppress multiple access interference, struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO), integrating the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
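
    The particle swarm half of such a hybrid can be written in a few lines. The sketch below minimises a generic relaxed cost in place of the maximum-likelihood detection metric, and omits the ant colony coupling and the final hard symbol decisions:

      import numpy as np

      rng = np.random.default_rng(1)

      def cost(x):
          # Stand-in objective; in MUD this would be the ML metric
          # ||y - Hs||^2 over candidate (relaxed) symbol vectors.
          return np.sum((x - 0.7)**2, axis=-1)

      n, dim, iters = 25, 8, 150
      w, c1, c2 = 0.72, 1.49, 1.49        # standard PSO coefficients

      x = rng.uniform(-1, 1, (n, dim))    # particle positions
      v = np.zeros((n, dim))              # particle velocities
      pbest, pval = x.copy(), cost(x)
      g = pbest[np.argmin(pval)]          # global best

      for _ in range(iters):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
          x = np.clip(x + v, -1, 1)
          f = cost(x)
          better = f < pval
          pbest[better], pval[better] = x[better], f[better]
          g = pbest[np.argmin(pval)]

      print("best cost found:", pval.min())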

  6. Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems

    NASA Astrophysics Data System (ADS)

    Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.

    2015-05-01

    Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded adhesion conditions (often constrained by the regulations in force on signalling and other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10%, while the wear reduction due to distributed traction systems and optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated against experimental data provided by Breda; for some components and their homologation process, the experimental results derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, the simulation results refer to tests performed on a high-speed train (Ansaldo Breda Emu V250) and on a tram (Ansaldo Breda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, the availability of technical data and the homologation process of the different components.

  7. Close packing in curved space by simulated annealing

    NASA Astrophysics Data System (ADS)

    Wille, L. T.

    1987-12-01

    The problem of packing spheres of a maximum radius on the surface of a four-dimensional hypersphere is considered. It is shown how near-optimal solutions can be obtained by packing soft spheres, modelled as classical particles interacting under an inverse power potential, followed by a subsequent hardening of the interaction. In order to avoid trapping in high-lying local minima, the simulated annealing method is used to optimise the soft-sphere packing. Several improvements over other work (based on local optimisation of random initial configurations of hard spheres) have been found. The freezing behaviour of this system is discussed as a function of particle number, softness of the potential and cooling rate. Apart from their geometric interest, these results are useful in the study of topological frustration, metallic glasses and quasicrystals.
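
    The anneal-then-harden idea can be illustrated with a toy version on the ordinary sphere rather than the 4D hypersphere: classical particles interacting under an inverse-power potential are cooled with a Metropolis acceptance rule and a geometric schedule:

      import numpy as np

      rng = np.random.default_rng(2)

      def energy(p, power=12):
          # Inverse-power "soft sphere" pair energy over all pairs.
          d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
          iu = np.triu_indices(len(p), k=1)
          return np.sum(d[iu]**(-power))

      n = 24
      pts = rng.normal(size=(n, 3))
      pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # points on the sphere
      e = energy(pts)

      T, cool = 1.0, 0.995                                 # temperature, cooling rate
      for _ in range(20000):
          i = rng.integers(n)
          trial = pts.copy()
          trial[i] += 0.05*rng.normal(size=3)              # perturb one particle
          trial[i] /= np.linalg.norm(trial[i])             # keep it on the sphere
          e_new = energy(trial)
          if e_new < e or rng.random() < np.exp((e - e_new)/T):   # Metropolis rule
              pts, e = trial, e_new
          T *= cool

      d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      print("smallest pairwise chord after annealing:", d[np.triu_indices(n, 1)].min())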

  8. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous number of real-time experiments and computational simulations in order to assess and ensure the design objectives subject to various constraints. In most cases, the computational resources and time required per simulation are large. In applications such as sensitivity analysis and design optimisation, where thousands or millions of simulations must be carried out, this becomes prohibitively demanding for designers. Nowadays, approximation models, otherwise known as surrogate models (SM), are widely employed to reduce the computational resources and time required for analysing various engineering systems. Approaches such as Kriging, neural networks, polynomials and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
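
    In Python, a Kriging-style surrogate with k-fold cross-validation over several covariance models (playing the role of the theoretical variogram models compared in the paper) might look as follows; the synthetic response is a stand-in for the panel/viscous solver outputs:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(3)

      # Stand-in aerodynamic response: a lift-like coefficient vs. two
      # shape parameters, in place of the flow-solver outputs.
      X = rng.uniform(0, 1, (60, 2))
      y = np.sin(3*X[:, 0]) + 0.5*X[:, 1]**2 + 0.02*rng.normal(size=60)

      kernels = {"RBF": RBF(), "Matern 3/2": Matern(nu=1.5),
                 "RationalQuadratic": RationalQuadratic()}

      cv = KFold(n_splits=5, shuffle=True, random_state=0)
      for name, k in kernels.items():
          gp = GaussianProcessRegressor(kernel=k, normalize_y=True)
          scores = cross_val_score(gp, X, y, cv=cv, scoring="r2")
          print(f"{name:>18}: R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")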

  9. Optimisation potential for a SBR plant based upon integrated modelling for dry and wet weather conditions.

    PubMed

    Rönner-Holm, S G E; Kaufmann Alves, I; Steinmetz, H; Holm, N C

    2009-01-01

    Integrated dynamic simulation analysis of a full-scale municipal sequential batch reactor (SBR) wastewater treatment plant (WWTP) was performed using the KOSMO pollution load simulation model for the combined sewer system (CSS) and the ASM3 + EAWAG-BioP model for the WWTP. Various optimising strategies for dry and storm weather conditions were developed to raise the purification and hydraulic performance and to reduce operation costs based on simulation studies with the calibrated WWTP model. The implementation of some strategies on the plant led to lower effluent values and an average annual saving of 49,000 euro including sewage tax, which is 22% of the total running costs. Dynamic simulation analysis of CSS for an increased WWTP influent over a period of one year showed high potentials for reducing combined sewer overflow (CSO) volume by 18-27% and CSO loads for COD by 22%, NH(4)-N and P(total) by 33%. In addition, the SBR WWTP could easily handle much higher influents without exceeding the monitoring values. During the integrated simulation of representative storm events, the total emission load for COD dropped to 90%, the sewer system emitted 47% less, whereas the pollution load in the WWTP effluent increased to only 14% with 2% higher running costs.

  10. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95  <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase  <200 ms and for changes in the breathing period of  <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.

  11. pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data

    NASA Astrophysics Data System (ADS)

    Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.

    The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is Unix-based, open-source software (BSD licensed) written in Python.
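
    The core PCA-compression idea behind such a toolkit can be sketched with plain numpy: centre the trajectory, keep the leading principal components, and store only the scores. The synthetic random-walk trajectory below stands in for real MD data:

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic "trajectory": 500 frames of 100 atoms (x, y, z), flattened,
      # with correlated motion so that a few PCs capture most of the variance.
      frames, n_atoms = 500, 100
      traj = rng.normal(size=(frames, 3*n_atoms)).cumsum(axis=0)

      mean = traj.mean(axis=0)
      centred = traj - mean

      # PCA via SVD; keep enough components for 90% of the variance.
      U, s, Vt = np.linalg.svd(centred, full_matrices=False)
      var = s**2 / np.sum(s**2)
      k = int(np.searchsorted(np.cumsum(var), 0.90)) + 1

      scores = centred @ Vt[:k].T                  # compressed representation
      recon = scores @ Vt[:k] + mean               # decompressed trajectory

      stored = scores.size + Vt[:k].size + mean.size
      print(f"{k} PCs keep 90% of the variance; {stored} floats vs {traj.size}")
      print("RMS reconstruction error:", np.sqrt(np.mean((recon - traj)**2)))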

  12. A simulation-optimization model for effective water resources management in the coastal zone

    NASA Astrophysics Data System (ADS)

    Spanoudaki, Katerina; Kampanis, Nikolaos

    2015-04-01

    Coastal areas are the most densely-populated areas in the world. Consequently water demand is high, posing great pressure on fresh water resources. Climatic change and its direct impacts on meteorological variables (e.g. precipitation) and indirect impact on sea level rise, as well as anthropogenic pressures (e.g. groundwater abstraction), are strong drivers causing groundwater salinisation and subsequently affecting coastal wetlands salinity with adverse effects on the corresponding ecosystems. Coastal zones are a difficult hydrologic environment to represent with a mathematical model due to the large number of contributing hydrologic processes and variable-density flow conditions. Simulation of sea level rise and tidal effects on aquifer salinisation and accurate prediction of interactions between coastal waters, groundwater and neighbouring wetlands requires the use of integrated surface water-groundwater mathematical models. In the past few decades several computer codes have been developed to simulate coupled surface and groundwater flow. However, most integrated surface water-groundwater models are based on the assumption of constant fluid density and therefore their applicability to coastal regions is questionable. Thus, most of the existing codes are not well-suited to represent surface water-groundwater interactions in coastal areas. To this end, the 3D integrated surface water-groundwater model IRENE (Spanoudaki et al., 2009; Spanoudaki, 2010) has been modified in order to simulate surface water-groundwater flow and salinity interactions in the coastal zone. IRENE, in its original form, couples the 3D shallow water equations to the equations describing 3D saturated groundwater flow of constant density. A semi-implicit finite difference scheme is used to solve the surface water flow equations, while a fully implicit finite difference scheme is used for the groundwater equations. Pollution interactions are simulated by coupling the advection-diffusion equation describing the fate and transport of contaminants introduced in a 3D turbulent flow field to the partial differential equation describing the fate and transport of contaminants in 3D transient groundwater flow systems. The model has been further developed to include the effects of density variations on surface water and groundwater flow, while the already built-in solute transport capabilities are used to simulate salinity interactions. The refined model is based on the finite volume method using a cell-centred structured grid, providing thus flexibility and accuracy in simulating irregular boundary geometries. For addressing water resources management problems, simulation models are usually externally coupled with optimisation-based management models. However this usually requires a very large number of iterations between the optimisation and simulation models in order to obtain the optimal management solution. As an alternative approach, for improved computational efficiency, an Artificial Neural Network (ANN) is trained as an approximate simulator of IRENE. The trained ANN is then linked to a Genetic Algorithm (GA) based optimisation model for managing salinisation problems in the coastal zone. The linked simulation-optimisation model is applied to a hypothetical study area for performance evaluation. 
    Acknowledgement: The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone' (2013-2015).

    References: Spanoudaki, K., Stamou, A.I. and Nanou-Giannarou, A. (2009). Development and verification of a 3-D integrated surface water-groundwater model. Journal of Hydrology, 375(3-4), 410-427. Spanoudaki, K. (2010). Integrated numerical modelling of surface water groundwater systems (in Greek). Ph.D. Thesis, National Technical University of Athens, Greece.
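
    The simulation-optimisation coupling described above can be sketched compactly: a neural network is trained on sampled simulator runs, and an evolutionary optimiser then searches the cheap surrogate. In the sketch below the "simulator" and its salinity response are invented placeholders rather than IRENE outputs, and scipy's differential evolution stands in for the GA:

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)

      def simulator(q):
          # Placeholder for an expensive IRENE run: maps two pumping rates
          # to a salinity metric (entirely synthetic).
          return 1.0 + 2.0*q[..., 0]**2 + 1.5*q[..., 0]*q[..., 1] + q[..., 1]**2

      # 1. Sample the "simulator" to build training data for the ANN surrogate.
      Q = rng.uniform(0, 1, (200, 2))
      s = simulator(Q) + 0.01*rng.normal(size=200)
      ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(Q, s)

      # 2. Optimise over the cheap surrogate: maximise total abstraction
      #    subject to a soft penalty on predicted salinity.
      def objective(q):
          salinity = ann.predict(q.reshape(1, -1))[0]
          return -(q[0] + q[1]) + 10.0*max(0.0, salinity - 2.0)

      res = differential_evolution(objective, bounds=[(0, 1), (0, 1)], seed=0)
      print("optimal pumping rates:", res.x, "objective:", res.fun)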

  13. Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning

    DOE PAGES

    Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...

    2016-04-26

    A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.

  14. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Xian-Bing; Gao, X. Z.; Lu, Lihua; Liu, Yu; Zhang, Hengzhen

    2016-07-01

    A new bio-inspired algorithm, namely Bird Swarm Algorithm (BSA), is proposed for solving optimisation applications. BSA is based on the swarm intelligence extracted from the social behaviours and social interactions in bird swarms. Birds mainly have three kinds of behaviours: foraging behaviour, vigilance behaviour and flight behaviour. Birds may forage for food and escape from the predators by the social interactions to obtain a high chance of survival. By modelling these social behaviours, social interactions and the related swarm intelligence, four search strategies associated with five simplified rules are formulated in BSA. Simulations and comparisons based on eighteen benchmark problems demonstrate the effectiveness, superiority and stability of BSA. Some proposals for future research about BSA are also discussed.

  15. A simulation and optimisation procedure to model daily suppression resource transfers during a fire season in Colorado

    Treesearch

    Yu Wei; Erin J. Belval; Matthew P. Thompson; Dave E. Calkin; Crystal S. Stonesifer

    2016-01-01

    Sharing fire engines and crews between fire suppression dispatch zones may help improve the utilisation of fire suppression resources. Using the Resource Ordering and Status System, the Predictive Services’ Fire Potential Outlooks and the Rocky Mountain Region Preparedness Levels from 2010 to 2013, we tested a simulation and optimisation procedure to transfer crews and...

  16. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real-time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with performance generally judged by the goodness of fit of the calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model for two catchments are compared based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.

  17. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
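
    The NLopt toolbox that Cyclops draws on exposes its algorithms through a small Python API; a minimal sketch is given below, with a synthetic two-parameter cost standing in for the hindcast wave-height RMSE that the Cylc suite would compute for each candidate parameter set:

      import numpy as np
      import nlopt

      def cost(x, grad):
          # Derivative-free objective (grad is unused by the LN_* algorithms).
          # Stand-in for the RMSE of hindcast significant wave height
          # against collocated altimeter records.
          return (x[0] - 1.2)**2 + 0.5*(x[1] - 0.8)**2 + 0.01*np.sin(5*x[0])

      opt = nlopt.opt(nlopt.LN_NELDERMEAD, 2)   # gradient-free simplex search
      opt.set_lower_bounds([0.0, 0.0])
      opt.set_upper_bounds([3.0, 3.0])
      opt.set_min_objective(cost)
      opt.set_xtol_rel(1e-5)
      opt.set_maxeval(500)

      x_opt = opt.optimize(np.array([1.0, 1.0]))
      print("optimised parameters:", x_opt, "cost:", opt.last_optimum_value())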

  18. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    NASA Astrophysics Data System (ADS)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  19. Optimisation of confinement in a fusion reactor using a nonlinear turbulence model

    NASA Astrophysics Data System (ADS)

    Highcock, E. G.; Mandell, N. R.; Barnes, M.

    2018-04-01

    The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.

  20. Optimised in vitro applicable loads for the simulation of lateral bending in the lumbar spine.

    PubMed

    Dreischarf, Marcel; Rohlmann, Antonius; Bergmann, Georg; Zander, Thomas

    2012-07-01

    In in vitro studies of the lumbar spine simplified loading modes (compressive follower force, pure moment) are usually employed to simulate the standard load cases flexion-extension, axial rotation and lateral bending of the upper body. However, the magnitudes of these loads vary widely in the literature. Thus the results of current studies may lead to unrealistic values and are hardly comparable. It is still unknown which load magnitudes lead to a realistic simulation of maximum lateral bending. A validated finite element model of the lumbar spine was used in an optimisation study to determine which magnitudes of the compressive follower force and bending moment deliver results that fit best with averaged in vivo data. The best agreement with averaged in vivo measured data was found for a compressive follower force of 700 N and a lateral bending moment of 7.8 Nm. These results show that loading modes that differ strongly from the optimised one may not realistically simulate maximum lateral bending. The simplified but in vitro applicable loading cannot perfectly mimic the in vivo situation. However, the optimised magnitudes are those which agree best with averaged in vivo measured data. Its consequent application would lead to a better comparability of different investigations. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  1. Battery Cell Balancing Optimisation for Battery Management System

    NASA Astrophysics Data System (ADS)

    Yusof, M. S.; Toha, S. F.; Kamisan, N. A.; Hashim, N. N. W. N.; Abdullah, M. A.

    2017-03-01

    Battery cell balancing in electrical applications such as home electronic equipment and electric vehicles is very important for extending battery run time, commonly referred to as battery life. The underlying approach is to equalise the voltage and state of charge (SOC) between the cells when they are fully charged. To control and extend battery life, cell balancing is designed and manipulated accordingly, while also shortening the charging process. Active and passive cell balancing strategies enable balancing of the battery in a high-performance configuration so that the charging process is faster. The experiments and simulations cover an analysis of how quickly the battery can be balanced within a given time. The simulation-based analysis is conducted to assess the use of optimisation in active or passive cell balancing to extend battery life over long periods of time.

  2. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.

  3. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    NASA Astrophysics Data System (ADS)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulation experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.

  4. Lap time simulation and design optimisation of a brushed DC electric motorcycle for the Isle of Man TT Zero Challenge

    NASA Astrophysics Data System (ADS)

    Dal Bianco, N.; Lot, R.; Matthys, K.

    2018-01-01

    This work concerns the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire length of the Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.

  5. Ultrasonic guided wave inspection techniques for brazed joints in aeronautical engines

    NASA Astrophysics Data System (ADS)

    Comot, Pierre

    The aeronautical industry is seeking to study the possibility of using brazed joints structurally, with a view to reducing weight and cost. The development of a fast, reliable and inexpensive evaluation method for assessing the structural integrity of the joints therefore appears indispensable. The mechanical strength of a brazed joint depends mainly on the amount of brittle phase in its microstructure. Ultrasonic guided waves can detect this type of phase when coupled with a spatio-temporal measurement; moreover, the nature of such waves permits the inspection of joints with complex shapes. This thesis therefore concentrates on the development of a technique based on ultrasonic guided waves for the inspection of Inconel 625 lap brazed joints with BNi-2 filler metal. First, a finite element model of the joint was used to simulate ultrasound propagation and to optimise the inspection parameters; the simulation also demonstrated the feasibility of the technique for detecting the amount of brittle phase in this type of joint. The optimised parameters were the shape of the excitation signal, its centre frequency and the excitation direction. The simulations showed that the energy of the ultrasonic wave transmitted through the joint, as well as the reflected energy, both extracted from the dispersion curves, were proportional to the amount of brittle phase present in the joint; this method therefore makes it possible to identify the presence or absence of a brittle phase in this type of joint. Experiments were then carried out on three typical samples with different amounts of brittle phase in the joint, obtained using different brazing times (1, 60 and 180 min). For this purpose, an automated test bench was developed, allowing an analysis similar to that used in simulation; the experimental parameters were chosen in accordance with the optimisation performed in the simulations and after an initial optimisation of the experimental procedure. Finally, the experimental results confirm the simulation results and demonstrate the potential of the developed method.

  6. Analysis of dynamic cerebral autoregulation using an ARX model based on arterial blood pressure and middle cerebral artery velocity simulation.

    PubMed

    Liu, Y; Allen, R

    2002-09-01

    The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83+/-0.14) in fitting MCAV. An additional five sets of measured ABP of length 236+/-154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
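
    An ARX model of this kind can be fitted by ordinary least squares once a lagged regressor matrix is assembled. The sketch below uses synthetic stand-ins for the measured ABP (input) and MCAV (output) signals, fits the coefficients, and computes the step response from which a recovery index such as R5% would be read off:

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic ABP input and a "true" first-order response as MCAV.
      T = 600
      u = 0.05*rng.normal(size=T).cumsum()             # slowly varying pressure
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = 0.9*y[t-1] + 0.3*u[t-1] + 0.01*rng.normal()

      na, nb = 2, 2                                    # ARX model orders
      rows = [np.r_[y[t-na:t][::-1], u[t-nb:t][::-1]] for t in range(max(na, nb), T)]
      Phi, target = np.array(rows), y[max(na, nb):]
      theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
      a, b = theta[:na], theta[na:]

      # Step response of the fitted model (unit ABP step applied at t = 0).
      step = np.zeros(100)
      for t in range(1, 100):
          past_y = [step[t-i] if t - i >= 0 else 0.0 for i in range(1, na+1)]
          past_u = [1.0 if t - j >= 0 else 0.0 for j in range(1, nb+1)]
          step[t] = np.dot(a, past_y) + np.dot(b, past_u)

      print("a:", a, "b:", b, "steady-state step response:", step[-1])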

  7. An integrated modelling and multicriteria analysis approach to managing nitrate diffuse pollution: 2. A case study for a chalk catchment in England.

    PubMed

    Koo, B K; O'Connell, P E

    2006-04-01

    The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km(2)) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.

  8. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    NASA Astrophysics Data System (ADS)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. Nowadays, field testing is impractical for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack precludes global optimisation of protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, task interrelations, and the processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of global optimisation by enabling information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help adjust the message scheduling scheme and obtain better system performance.

  9. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    NASA Astrophysics Data System (ADS)

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

    With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, as various factors and different stages have become inevitably coupled during the design process. Management of massive information, or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming to overcome the difficulties involved in designing the spindle box system of a large ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.

  10. Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM) and genetic algorithm method (GA)

    NASA Astrophysics Data System (ADS)

    Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    In this study, Computer Aided Engineering was used for injection moulding simulation. A Design of Experiments (DOE) method based on a Latin Square orthogonal array was utilised. The relationships between the injection moulding parameters and warpage were identified from the experimental data. Response Surface Methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method of combining RSM and GA also contributes to minimising the warpage that occurs.
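
    A compact sketch of the RSM-plus-evolutionary step: fit a second-order response surface to DOE results, then minimise the fitted surface (scipy's differential evolution is used here as a stand-in for the GA). The DOE table below is synthetic, not the study's Moldflow data:

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(7)

      # Synthetic DOE table: (melt temperature, packing pressure) -> warpage,
      # both factors scaled to [0, 1].
      X = rng.uniform(0, 1, (25, 2))
      warp = (0.4 + (X[:, 0] - 0.55)**2 + 0.6*(X[:, 1] - 0.35)**2
              + 0.02*rng.normal(size=25))

      # RSM: quadratic response surface fitted to the DOE data.
      rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      rsm.fit(X, warp)
      print("R^2 of the response surface:", rsm.score(X, warp))

      # Evolutionary stage: minimise predicted warpage over the process window.
      res = differential_evolution(lambda x: rsm.predict(x.reshape(1, -1))[0],
                                   bounds=[(0, 1), (0, 1)], seed=0)
      print("optimum scaled settings:", res.x, "predicted warpage:", res.fun)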

  11. On the analysis of using 3-coil wireless power transfer system in retinal prosthesis.

    PubMed

    Bai, Shun; Skafidas, Stan

    2014-01-01

    The design of wireless power transmission systems (WPTS) using inductive coupling has been investigated extensively in the last decade. Depending on the configuration of the coupling system, various design methods have been proposed to optimise the power transmission efficiency, based on tuning circuitry, quality factor optimisation and geometrical configuration. Recently, a 3-coil WPTS was introduced in retinal prostheses to overcome the low power transfer efficiency caused by a low coupling coefficient. Here we present a method to analyse this 3-coil WPTS using S-parameters to directly obtain the maximum achievable power transfer efficiency. Through electromagnetic simulation, we raise the question of the conditions under which a 3-coil WPTS improves the powering of a retinal prosthesis.

  12. Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)

    NASA Astrophysics Data System (ADS)

    Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan

    2010-05-01

    The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.

  13. Twist limits for late twisting double somersaults on trampoline.

    PubMed

    Yeadon, M R; Hiley, M J

    2017-06-14

    An angle-driven computer simulation model of aerial movement was used to determine the maximum amount of twist that could be produced in the second somersault of a double somersault on trampoline using asymmetrical movements of the arms and hips. Lower bounds were placed on the durations of arm and hip angle changes based on performances of a world trampoline champion whose inertia parameters were used in the simulations. The limiting movements were identified as the largest possible odd number of half twists for forward somersaulting takeoffs and even number of half twists for backward takeoffs. Simulations of these two limiting movements were found using simulated annealing optimisation to produce the required amounts of somersault, tilt and twist at landing after a flight time of 2.0s. Additional optimisations were then run to seek solutions with the arms less adducted during the twisting phase. It was found that 3½ twists could be produced in the second somersault of a forward piked double somersault with arms abducted 8° from full adduction during the twisting phase and that three twists could be produced in the second somersault of a backward straight double somersault with arms fully adducted to the body. These two movements are at the limits of performance for elite trampolinists. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Analysis and optimisation of the convergence behaviour of the single channel digital tanlock loop

    NASA Astrophysics Data System (ADS)

    Al-Kharji Al-Ali, Omar; Anani, Nader; Al-Araji, Saleh; Al-Qutayri, Mahmoud

    2013-09-01

    The mathematical analysis of the convergence behaviour of the first-order single channel digital tanlock loop (SC-DTL) is presented. This article also describes a novel technique that allows controlling the convergence speed of the loop, i.e. the time taken by the phase-error to reach its steady-state value, by using a specialised controller unit. The controller is used to adjust the convergence speed so as to selectively optimise a given performance parameter of the loop. For instance, the controller may be used to speed up the convergence in order to increase the lock range and improve the acquisition speed. However, since increasing the lock range can degrade the noise immunity of the system, in a noisy environment the controller can slow down the convergence speed until locking is achieved. Once the system is in lock, the convergence speed can be increased to improve the acquisition speed. The performance of the SC-DTL system was assessed against similar arctan-based loops and the results demonstrate the success of the controller in optimising the performance of the SC-DTL loop. The results of the system testing using MATLAB/Simulink simulation are presented. A prototype of the proposed system was implemented using a field programmable gate array module and the practical results are in good agreement with those obtained by simulation.

  15. The design of a NaI(Tl) crystal in a system optimised for high-throughput and emergency measurement of iodine-131 in the human thyroid

    NASA Astrophysics Data System (ADS)

    Vrba, Tomas; Fojtik, Pavel

    2014-11-01

    In the case of an accidental release of 131I, a system for large-scale monitoring of the population for radionuclide intake is needed. The monitoring system is required to be capable of measuring adult as well as child subjects across a wide range of ages. Such a system has been developed by the National Radiation Protection Institute in Prague (NRPI) and the Evinet company (a member of the Nuvia Group). This paper describes the optimisation of the NaI(Tl) detector chosen for this system. The design of the crystal was based on Monte Carlo (MC) simulations and supported by the literature. These simulations examined three different crystal shapes and several dimensions. Based on the MC study, two prototype detectors, with crystal diameters of 80 and 73 mm, were manufactured and compared with the ∅45×40 mm crystals used for thyroid measurement at NRPI and with a standard NaI(Tl) probe (∅76.2×76.2 mm). The detector with the 80 mm diameter crystal gave the best results and was chosen for further production.

  16. On simulated annealing phase transitions in phylogeny reconstruction.

    PubMed

    Strobl, Maximilian A R; Barker, Daniel

    2016-08-01

    Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
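
    The diagnostic described above amounts to monitoring the specific heat, estimated as C(T) = Var(E)/T^2, while the annealing temperature falls; a peak in C marks a simulated annealing phase transition. A generic sketch on a toy energy landscape (not a parsimony search over trees) is:

      import numpy as np

      rng = np.random.default_rng(8)

      def energy(x):
          # Toy rugged landscape standing in for a tree parsimony score.
          return np.sum(x**2) + 2.0*np.sum(np.cos(4*np.pi*x))

      x = rng.uniform(-2, 2, 10)
      e = energy(x)

      for T in np.geomspace(5.0, 0.01, 40):         # geometric cooling schedule
          samples = []
          for _ in range(400):                      # sample at this temperature
              trial = x + 0.2*rng.normal(size=10)
              e_new = energy(trial)
              if e_new < e or rng.random() < np.exp((e - e_new)/T):
                  x, e = trial, e_new
              samples.append(e)
          c = np.var(samples) / T**2                # specific heat estimate
          print(f"T={T:7.3f}  <E>={np.mean(samples):8.3f}  C={c:10.3f}")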

  17. Optimisation of phase ratio in the triple jump using computer simulation.

    PubMed

    Allen, Sam J; King, Mark A; Yeadon, M R Fred

    2016-04-01

    The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements and the coactivation mechanism to maintain spinal stability were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was obtained using an inverse dynamics approach along with a stability-based optimisation. Trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporation of the stability constraint in the optimisation algorithm required higher antagonistic activities for all resistance levels, mostly close to the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation, indicating a greater stability demand in backward extension movements against higher resistance in the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work and power had distinct kinematic profiles, which led to recruitment patterns with different timing and amplitude of activation.

  19. H2/H∞ control for grid-feeding converter considering system uncertainty

    NASA Astrophysics Data System (ADS)

    Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang

    2017-05-01

    Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional integral (PI) and proportional resonant-based control strategies are applied to control the output power or current of a GFC. However, those control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy, which can mitigate the above restrictions. The uncertainty and disturbance are included when formulating the GFC system state-space model, making it more accurate in reflecting practical system conditions. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, the paper uses particle swarm optimisation to search for an H2/H∞ optimal controller. Several case studies, implemented by both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
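
    As a rough illustration of the parameter search described above, the following sketch implements a plain global-best particle swarm optimiser minimising a stand-in cost. In the paper, the cost of a candidate controller comes from the mixed H2/H∞ objective of the closed-loop GFC model; that is replaced here by a hypothetical quadratic surrogate (control_cost), so only the PSO mechanics are faithful.

```python
import numpy as np

def control_cost(k):
    # Hypothetical surrogate for the mixed H2/H-infinity objective evaluated
    # for a candidate controller parameter vector k (the real objective would
    # come from the closed-loop GFC model, not from this toy function).
    target = np.array([2.0, 0.5, 1.5])
    return float(np.sum((k - target) ** 2) + 0.1 * np.sum(np.abs(k)))

def pso(cost, dim=3, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm search over the box [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -5, 5)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, cost(g)

gains, best = pso(control_cost)
print("best controller gains:", np.round(gains, 3), " cost:", round(best, 4))
```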

  20. On the performance of energy detection-based CR with SC diversity over IG channel

    NASA Astrophysics Data System (ADS)

    Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka

    2017-12-01

    Cognitive radio (CR) is a viable 5G technology to address spectrum scarcity. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the energy detection-based spectrum sensing technique in CR networks over an inverse Gaussian channel with selection combining diversity is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
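
    The threshold optimisation step can be sketched as follows: draw the energy statistic under both hypotheses by Monte Carlo, then sweep the threshold to minimise the total error probability (false alarm plus missed detection, equal priors assumed). This toy uses an AWGN channel rather than the paper's inverse Gaussian shadowing model and omits diversity; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, TRIALS, SNR = 64, 20_000, 1.0       # samples per sensing slot, linear SNR

def energy_stat(signal_present):
    # Test statistic: average energy of N received samples (unit noise power).
    noise = rng.standard_normal(N)
    sig = np.sqrt(SNR) * rng.standard_normal(N) if signal_present else 0.0
    return np.mean((noise + sig) ** 2)

# Monte Carlo draws of the statistic under H0 (noise only) and H1 (signal).
t0 = np.array([energy_stat(False) for _ in range(TRIALS)])
t1 = np.array([energy_stat(True) for _ in range(TRIALS)])

# Sweep the threshold and pick the one minimising the total error probability
# P_err = P_false_alarm + P_missed_detection.
best = min(np.linspace(0.5, 3.0, 500),
           key=lambda lam: np.mean(t0 > lam) + np.mean(t1 < lam))
print(f"optimised threshold: {best:.3f}  "
      f"Pf = {np.mean(t0 > best):.3f}  Pd = {np.mean(t1 > best):.3f}")
```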

  1. Identification of the contribution of contact and aerial biomechanical parameters in acrobatic performance

    PubMed Central

    Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël

    2017-01-01

    Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers have previously been identified as predictors of success in acrobatics. The purpose of the present study was to evaluate the relative contribution of these parameters to performance through expertise- or optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of attempting it and the complexity of mastering it. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics, and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state only contributed to performance in novice recorded trials. The contribution of moment of inertia to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. The contribution of momentum transfer to the trunk during flight prevailed in all recorded trials. Although optimisation decreased the transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique. Conversely, mainly improved aerial technique helped advanced gymnasts increase their performance. For both, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalized to any aerial skill learning investigation. PMID:28422954

  2. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types, but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
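
    The core trade-off the methodology optimises, accuracy versus fixed-point word length, can be made concrete with a simple Q-format quantisation experiment. The format choices, test signal and SQNR accuracy metric below are common conventions used purely for illustration, not the paper's analytical accuracy model.

```python
import numpy as np

def to_fixed(x, frac_bits, word_bits=16):
    """Quantise a float array to signed Qm.n fixed point (two's complement)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

def from_fixed(q, frac_bits):
    return q.astype(np.float64) / (1 << frac_bits)

# Accuracy metric: signal-to-quantisation-noise ratio for a candidate format.
x = np.sin(np.linspace(0, 8 * np.pi, 1000))          # reference float signal
for n in (7, 11, 15):                                # candidate fractional bits
    err = x - from_fixed(to_fixed(x, n), n)
    sqnr = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))
    print(f"Q{15 - n}.{n}: SQNR = {sqnr:5.1f} dB")
```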

  3. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  4. The development of response surface pathway design to reduce animal numbers in toxicity studies

    PubMed Central

    2014-01-01

    Background This study describes the development of Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison of UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504 μg/kgBW), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, reduction in sample size by increasing the number of mice on higher design levels and incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with UDPs and RW design demonstrated the superiority of RSP. Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560

  5. Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security

    NASA Astrophysics Data System (ADS)

    Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver

    This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) development of simulation models as scenario refinements, and (3) assessment of alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is built up in the project.

  6. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality, one of whose key advantages is the target conformality allowed by the physical properties of ion species. However, in order to fully exploit its potential, online monitoring is required to assess the treatment quality, namely monitoring devices relying on the detection of secondary radiation. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed, based on the Monte Carlo data, to predict the expected precision for a given geometrical configuration. Such a method meets clinical workflow requirements by providing a solution that is both relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  7. An illustration of new methods in machine condition monitoring, Part I: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.

    2017-05-01

    There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
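
    To make the stochastic resonance mechanism concrete, the sketch below simulates a discrete-time overdamped bistable resonator driven by a weak (subthreshold) sinusoid plus noise, and measures the coherence of the two-state output with the input signal as the noise intensity is swept; coherence peaks at an intermediate noise level. This toy fixes the resonator rather than optimising it as the paper does, and all parameter values are illustrative assumptions.

```python
import numpy as np

def coherence(sigma, amp=0.25, omega=0.05, dt=0.05, steps=100_000, seed=0):
    """Euler map of an overdamped bistable unit: x' = x - x^3 + weak signal + noise.
    amp is subthreshold: without noise the state cannot cross between wells."""
    rng = np.random.default_rng(seed)
    t = np.arange(steps) * dt
    kicks = np.sqrt(dt) * sigma * rng.standard_normal(steps)
    x, xs = -1.0, np.empty(steps)
    for k in range(steps):
        x += dt * (x - x ** 3 + amp * np.sin(omega * t[k])) + kicks[k]
        xs[k] = x
    # Coherence of the two-state output with the driving signal
    # (about 0.64 for perfectly synchronised well-hopping, near 0 otherwise).
    return abs(np.mean(np.sign(xs) * np.sin(omega * t)))

for sigma in (0.05, 0.2, 0.35, 0.6, 1.0):
    print(f"noise sigma = {sigma:4.2f}   coherence = {coherence(sigma):.3f}")
```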

  8. Advanced data management for optimising the operation of a full-scale WWTP.

    PubMed

    Beltrán, Sergio; Maiza, Mikel; de la Sota, Alejandro; Villanueva, José María; Ayesa, Eduardo

    2012-01-01

    The lack of appropriate data management tools is presently a limiting factor for a broader implementation and a more efficient use of sensors and analysers, monitoring systems and process controllers in wastewater treatment plants (WWTPs). This paper presents a technical solution for advanced data management of a full-scale WWTP. The solution is based on an efficient and intelligent use of the plant data by a standard centralisation of the heterogeneous data acquired from different sources, effective data processing to extract adequate information, and a straightforward connection to other emerging tools focused on the operational optimisation of the plant such as advanced monitoring and control or dynamic simulators. A pilot study of the advanced data manager tool was designed and implemented in the Galindo-Bilbao WWTP. The results of the pilot study showed its potential for agile and intelligent plant data management by generating new enriched information combining data from different plant sources, facilitating the connection of operational support systems, and developing automatic plots and trends of simulated results and actual data for plant performance and diagnosis.

  9. Influence of the Size of Cohorts in Adaptive Design for Nonlinear Mixed Effects Models: An Evaluation by Simulation for a Pharmacokinetic and Pharmacodynamic Model for a Biomarker in Oncology

    PubMed Central

    Lestini, Giulia; Dumont, Cyrielle; Mentré, France

    2015-01-01

    Purpose In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e. when no adaptation is performed, using wrong prior parameters. Methods We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Results Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with small first cohort, but not better than the balanced two-stage design. Conclusions Two-stage ADs are useful when prior parameters are unreliable. In case of small first cohort, more adaptations are needed but these designs are complex to implement. PMID:26123680

  10. Influence of the Size of Cohorts in Adaptive Design for Nonlinear Mixed Effects Models: An Evaluation by Simulation for a Pharmacokinetic and Pharmacodynamic Model for a Biomarker in Oncology.

    PubMed

    Lestini, Giulia; Dumont, Cyrielle; Mentré, France

    2015-10-01

    In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with a small first cohort, but not better than the balanced two-stage design. Two-stage ADs are useful when prior parameters are unreliable. In case of a small first cohort, more adaptations are needed but these designs are complex to implement.

  11. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  12. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  13. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis, and sometimes to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous variable measurements, with less work being done on repeated discrete-type measurements. Discrete data arise mainly in PD, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions. Example 1 is based on repeated dichotomous measurements, Example 2 is based on repeated count measurements and Example 3 is based on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data through design evaluation and optimisation.

  14. Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.

    PubMed

    Vanhooren, H; Yuan, Z; Vanrolleghem, P A

    2002-01-01

    We are witnessing an enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. A possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of this system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. In this evaluation, effluent quality is integrated as well.

  15. Virtual tryout planning in automotive industry based on simulation metamodels

    NASA Astrophysics Data System (ADS)

    Harsch, D.; Heingärtner, J.; Hortig, D.; Hora, P.

    2016-11-01

    Deep drawn sheet metal parts are increasingly designed to the feasibility limit, so achieving robust manufacturing is often challenging. The fluctuation of process and material properties often leads to robustness problems. Therefore, numerical simulations are used to detect the critical regions. To enhance the agreement with the real process conditions, the material data are acquired through a variety of experiments. Furthermore, the force distribution is taken into account. The simulation metamodel contains the virtual knowledge of a particular forming process, which is determined based on a series of finite element simulations with variable input parameters. Based on the metamodels, virtual process windows can be displayed for different configurations. This helps to improve the operating point as well as to adjust process settings in case the process becomes unstable. Furthermore, the time for tool tryout can be shortened by transferring the virtual knowledge contained in the metamodels to the optimisation of the drawbeads. This allows the tool manufacturer to focus on the essentials, to save time and to recognize complex relationships.
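
    A common minimal form of such a metamodel is a quadratic response surface fitted by least squares to a handful of simulation runs. The sketch below uses invented deep-drawing inputs and outputs purely to show the mechanics; it is not the metamodel type or the data of the paper.

```python
import numpy as np

# One row per finite element run: (blank holder force in kN, friction
# coefficient) and the resulting thinning of the critical zone. Invented values.
X = np.array([[300, 0.08], [300, 0.12], [400, 0.08],
              [400, 0.12], [350, 0.10], [500, 0.10]], dtype=float)
y = np.array([0.18, 0.21, 0.16, 0.20, 0.17, 0.19])

def features(X):
    # Quadratic response surface terms: 1, f, mu, f*mu, f^2, mu^2.
    f, mu = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), f, mu, f * mu, f ** 2, mu ** 2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# The fitted metamodel now answers "virtual tryout" queries instantly.
query = np.array([[420.0, 0.09]])
print("predicted thinning:", (features(query) @ beta)[0])
```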

  16. Tanlock loop noise reduction using an optimised phase detector

    NASA Astrophysics Data System (ADS)

    Al-kharji Al-Ali, Omar; Anani, Nader; Al-Qutayri, Mahmoud; Al-Araji, Saleh

    2013-06-01

    This article proposes a time-delay digital tanlock loop (TDTL) with a new phase detector (PD) design that is optimised for noise reduction, making it suitable for applications that require a wide lock range without sacrificing noise immunity. The proposed system employs two phase detectors: one optimises the noise immunity whilst the other controls the acquisition time of the TDTL system. Using the modified phase detector, it is possible to reduce the second- and higher-order harmonics by at least 50% compared with the conventional TDTL system. The proposed system was simulated and tested in MATLAB/Simulink using frequency step inputs and inputs corrupted with varying levels of harmonic distortion. A hardware prototype of the system was implemented using a field programmable gate array (FPGA). The practical and simulation results indicate considerable improvement in the noise performance of the proposed system over the conventional TDTL architecture.

  17. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  18. Sybil - efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
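
    The flux-balance analysis (FBA) that sybil accelerates reduces to a linear programme: maximise a biomass flux subject to the steady-state mass balance S v = 0 and flux bounds. Below is a minimal Python sketch on an invented three-reaction toy network (sybil itself is an R library; this only illustrates the underlying LP formulation).

```python
import numpy as np
from scipy.optimize import linprog

# Toy network:  A_ext -> A -> B -> biomass.  FBA maximises the biomass flux
# subject to steady state S v = 0 and capacity bounds on each reaction.
#             v_uptake  v_convert  v_biomass
S = np.array([[ 1,        -1,        0],     # metabolite A balance
              [ 0,         1,       -1]])    # metabolite B balance
bounds = [(0, 10), (0, 8), (0, None)]        # uptake capped at 10, enzyme at 8
c = np.array([0, 0, -1])                     # linprog minimises, so negate biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "-> growth rate:", -res.fun)
```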

  19. A novel specimen-specific methodology to optimise the alignment of long bones for experimental testing.

    PubMed

    Cheong, Vee San; Bull, Anthony M J

    2015-12-16

    The choice of coordinate system and alignment of bone will affect the quantification of mechanical properties obtained during in-vitro biomechanical testing. Where these are used in predictive models, such as finite element analysis, the fidelic description of these properties is paramount. Currently in bending and torsional tests, bones are aligned on a pre-defined fixed span based on the reference system marked out. However, large inter-specimen differences have been reported. This suggests a need for the development of a specimen-specific alignment system for use in experimental work. Eleven ovine tibiae were used in this study and three-dimensional surface meshes were constructed from micro-Computed Tomography scan images. A novel, semi-automated algorithm was developed and applied to the surface meshes to align the whole bone based on its calculated principal directions. Thereafter, the code isolates the optimised location and length of each bone for experimental testing. This resulted in a lowering of the second moment of area about the chosen bending axis in the central region. More importantly, the optimisation method decreases the irregularity of the shape of the cross-sectional slices as the unbiased estimate of the population coefficient of variation of the second moment of area decreased from a range of (0.210-0.435) to (0.145-0.317) in the longitudinal direction, indicating a minimisation of the product moment, which causes eccentric loading. Thus, this methodology serves as an important pre-step to align the bone for mechanical tests or simulation work, is optimised for each specimen, ensures repeatability, and is general enough to be applied to any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
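
    The principal-direction step of such an alignment can be sketched with an eigen-decomposition of the vertex covariance (PCA). Note this is a simplified stand-in: the paper's semi-automated algorithm additionally optimises the test span and works with second moments of area, and real meshes would need area weighting rather than the uniform vertex weighting assumed here.

```python
import numpy as np

def principal_alignment(vertices):
    """Rotate a surface-mesh point cloud so its principal directions align
    with the coordinate axes (long axis mapped to the last axis).
    vertices: (n, 3) array of mesh vertex coordinates."""
    centred = vertices - vertices.mean(axis=0)
    cov = np.cov(centred.T)                  # 3x3 covariance of the cloud
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return centred @ eigvecs                 # project onto principal axes

# Synthetic elongated "bone": an ellipsoidal cloud in an arbitrary pose.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((5000, 3)) * np.array([8.0, 1.5, 1.0])
pose = np.linalg.qr(rng.standard_normal((3, 3)))[0]      # random rotation
aligned = principal_alignment(cloud @ pose.T)
print("axis-aligned extents:", np.ptp(aligned, axis=0).round(1))
```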

  20. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), which provides methods that are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, ways to distribute the optimisation process that make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimising are applied to improve a product.

  1. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the results show that the proposed architecture and developed algorithms performed successfully and efficiently in a design optimisation with over 200 design variables.

  2. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    NASA Astrophysics Data System (ADS)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
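
    Below is a minimal sketch of spatial simulated annealing for gauge placement, assuming a simple coverage surrogate (mean squared distance to the nearest gauge) in place of the space-time averaged KED variance actually minimised in the study; the domain size, perturbation scale and cooling schedule are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
targets = rng.uniform(0, 100, (400, 2))       # prediction grid over the region

def criterion(gauges):
    # Surrogate for the space-time averaged kriging variance: mean squared
    # distance from each prediction location to its nearest rain-gauge.
    d = np.linalg.norm(targets[:, None, :] - gauges[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)

def ssa(n_gauges=12, iters=4000, t0=5.0, cooling=0.999):
    """Spatial simulated annealing: jitter one gauge at a time, accepting worse
    designs with a probability that shrinks as the temperature decreases."""
    gauges = rng.uniform(0, 100, (n_gauges, 2))
    score, t = criterion(gauges), t0
    for _ in range(iters):
        cand = gauges.copy()
        i = rng.integers(n_gauges)
        cand[i] = np.clip(cand[i] + rng.normal(0, 5, 2), 0, 100)
        s = criterion(cand)
        if s < score or rng.random() < np.exp((score - s) / t):
            gauges, score = cand, s
        t *= cooling
    return gauges, score

gauges, score = ssa()
print("optimised mean squared nearest-gauge distance:", round(score, 2))
```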

  3. A novel swarm intelligence algorithm for finding DNA motifs.

    PubMed

    Lei, Chengwei; Ruan, Jianhua

    2009-01-01

    Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.

  4. Application of the adjoint optimisation of shock control bump for ONERA-M6 wing

    NASA Astrophysics Data System (ADS)

    Nejati, A.; Mazaheri, K.

    2017-11-01

    This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analyzed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating from upstream of the SCB reduces the wave drag, by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations, and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves much more complexities, the overall results are shown to be similar to the two-dimensional case.

  5. Simulation and Optimization of an Astrophotonic Reformatter

    NASA Astrophysics Data System (ADS)

    Anagnos, Th; Harris, R. J.; Corrigan, M. K.; Reeves, A. P.; Townson, M. J.; MacLachlan, D. G.; Thomson, R. R.; Morris, T. J.; Schwab, C.; Quirrenbach, A.

    2018-05-01

    Image slicing is a powerful technique in astronomy. It allows the instrument designer to reduce the slit width of the spectrograph, increasing spectral resolving power whilst retaining throughput. Conventionally this is done using bulk optics, such as mirrors and prisms; however, more recently astrophotonic components known as photonic lanterns (PLs) and photonic reformatters have also been used. These devices reformat the multimode (MM) input light from a telescope into single-mode (SM) outputs, which can then be re-arranged to suit the spectrograph. The photonic dicer (PD) is one such device, designed to reduce the dependence of spectrograph size on telescope aperture and eliminate modal noise. We simulate the PD, optimising the throughput and geometrical design using Soapy and BeamProp. The simulated device shows a transmission between 8 and 20%, depending upon the type of adaptive optics (AO) correction applied, matching the experimental results well. We also investigate our idealised model of the PD and show that the barycentre of the slit varies only slightly with time, meaning that the modal noise contribution is very low when compared to conventional fibre systems. We further optimise our model device for both higher throughput and reduced modal noise. This device improves throughput by 6.4% and reduces the movement of the slit output by 50%, further improving stability. This shows the importance of properly simulating such devices, including atmospheric effects. Our work complements recent work in the field and is essential for optimising future photonic reformatters.

  6. VLSI Technology for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing scheme. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.

  7. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.

  8. Analysis of the car body stability performance after coupler jack-knifing during braking

    NASA Astrophysics Data System (ADS)

    Guo, Lirong; Wang, Kaiyun; Chen, Zaigang; Shi, Zhiyong; Lv, Kaikai; Ji, Tiancheng

    2018-06-01

    This paper aims to improve car body stability performance by optimising locomotive parameters when coupler jack-knifing occurs during braking. In order to prevent car body instability caused by coupler jack-knifing, a multi-locomotive simulation model and a series of field braking tests are developed to analyse the influence of the secondary suspension and the secondary lateral stopper on car body stability during braking. According to the simulation and test results, increasing the secondary lateral stiffness helps to limit the car body yaw angle during braking. However, it seriously affects the dynamic performance of the locomotive. For the secondary lateral stopper, its lateral stiffness and free clearance have a significant influence on improving the car body stability capacity, and less effect on the dynamic performance of the locomotive. An optimised measure was proposed and adopted on the test locomotive. For the optimised locomotive, the lateral stiffness of the secondary lateral stopper is increased to 7875 kN/m, while its free clearance is decreased to 10 mm. The optimised locomotive has excellent dynamic and safety performance. Compared with the original locomotive, the maximum car body yaw angle and coupler rotation angle of the optimised locomotive were reduced by 59.25% and 53.19%, respectively, in practical application. The maximum derailment coefficient was 0.32, and the maximum wheelset lateral force was 39.5 kN. Hence, reasonable parameters for the secondary lateral stopper can improve the car body stability capacity and the running safety of a heavy haul locomotive.

  9. Extending the FairRoot framework to allow for simulation and reconstruction of free streaming data

    NASA Astrophysics Data System (ADS)

    Al-Turany, M.; Klein, D.; Manafov, A.; Rybalchenko, A.; Uhlig, F.

    2014-06-01

    The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimise accessibility for beginners and developers, to be flexible and to cope with future developments. FairRoot enhances the synergy between the different physics experiments. As a first step toward the simulation of free streaming data, time-based simulation was introduced to the framework. The next step is the event source simulation. This is achieved via a client-server system. After digitization, the so-called "samplers" can be started; each sampler reads the data of the corresponding detector from the simulation files and makes it available to the reconstruction clients. The system makes it possible to develop and validate the online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer are described.

  10. Design of a compact antenna with flared groundplane for a wearable breast hyperthermia system.

    PubMed

    Curto, Sergio; Prakash, Punit

    2015-01-01

    Currently available microwave hyperthermia systems for breast cancer treatment do not conform to the intact breast and provide limited control of heating patterns, thereby hindering effective treatment. A compact patch antenna with a flared groundplane that may be integrated within a wearable hyperthermia system for the treatment of intact breast disease is proposed. A 3D simulation-based approach was employed to optimise the antenna design with the objective of maximising the hyperthermia treatment volume (41 °C iso-therm) while maintaining good impedance matching. The optimised antenna design was fabricated and experimentally evaluated with ex vivo tissue measurements. The optimised compact antenna yielded a -10 dB bandwidth of 90 MHz centred at 915 MHz, and was capable of creating hyperthermia treatment volumes of up to 14.4 cm(3) (31 mm × 28 mm × 32 mm) with an input power of 15 W. Experimentally measured reflection coefficients and transient temperature profiles were in good agreement with simulated profiles. Variations of ±50% in blood perfusion yielded variations in the treatment volume of up to 11.5%. When compared to an antenna with a similar patch element employing a conventional rectangular groundplane, the antenna with the flared groundplane afforded a 22.3% reduction in the power levels required to reach the same temperature, and yielded 2.4 times larger treatment volumes. The proposed patch antenna with a flared groundplane may be integrated within a wearable applicator for hyperthermia treatment of intact breast targets and has the potential to improve efficiency, increase patient comfort, and ultimately improve clinical outcomes.

  11. A methodology for the optimisation of a mm-wave scanner

    NASA Astrophysics Data System (ADS)

    Stec, L. Zoë; Podd, Frank J. W.; Peyton, Anthony J.

    2016-10-01

    The need to detect non-metallic items under clothes to prevent terrorism at transport hubs is becoming vital. Millimetre wave technology is able to penetrate clothing, yet interacts with objects concealed underneath. This paper considers active illumination using multiple transmitter and receiver antennas. The positioning of these antennas must achieve full body coverage, whilst minimising the number of antenna elements and the number of required measurements. It sets out a rapid simulation methodology, based on the Kirchhoff equations, to explore different scenarios for scanner architecture optimisation. The paper assumes that the electromagnetic waves used are at lower frequencies (say, 10-30 GHz) where the body temperature does not need to be considered. This range allows better penetration of clothing than higher frequencies, yet still provides adequate resolution. Since passengers vary greatly in shape and size, the system needs to work well with a range of body morphologies. Thus we have used two very differently shaped avatars to test the portal simulations. This simulation tool allows many different avatars to be generated quickly. Findings from these simulations indicated that the dimensions of the avatar did indeed have an effect on the pattern of illumination, and that the data for each antenna pair can easily be combined to compare different antenna geometries for a given portal architecture, resulting in useful insights into antenna placement. The data generated could be analysed both quantitatively and qualitatively, at various levels of scale.

  12. SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel

    NASA Astrophysics Data System (ADS)

    Nolle, Lars

    The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The resulting strip quality produced using the profiles found by SASS is compared with results from previous work and with the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.

  13. Comparison of simulation and experimental results for a gas puff nozzle on Ambiorix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnier, J-N.; Chevalier, J-M.; Dubroca, B.

    One of the source terms of Z-pinch experiments is the gas puff density profile. In order to characterise the gas jet, an experiment based on interferometry has been performed. The first study was a point measurement (a section density profile), which led us to develop a global and instantaneous interferometry imaging method. In order to optimise the nozzle, we simulated the experiment with a flow calculation code (ARES). In this paper, the experimental results are compared with simulations. The different gas properties (He, Ne, Ar) and the flow duration led us to take care, on the one hand, of the gas viscosity, and on the other, of modifying the code for unsteady flow.

  14. Assessing the spatial impact of climate on wheat productivity and the potential value of climate forecasts at a regional level

    NASA Astrophysics Data System (ADS)

    Wang, Enli; Xu, J.; Jiang, Q.; Austin, J.

    2009-03-01

    Quantification of the spatial impact of climate on crop productivity and of the potential value of seasonal climate forecasts can effectively assist the strategic planning of crop layout and help in understanding to what extent climate risk can be managed through responsive management strategies at a regional level. A simulation study was carried out to assess the climate impact on the performance of a dryland wheat-fallow system and the potential value of seasonal climate forecasts for nitrogen management in the Murray-Darling Basin (MDB) of Australia. Daily climate data (1889-2002) from 57 stations were used with the agricultural systems simulator (APSIM) to simulate wheat productivity and nitrogen requirement as affected by climate. On a good soil, simulated grain yield ranged from <2 t/ha in the western inland to >7 t/ha in the eastern border regions. Optimal nitrogen rates ranged from <60 kgN/ha/yr to >200 kgN/ha/yr. Simulated gross margin was in the range of -20/ha to 700/ha, increasing eastwards. Wheat yield was closely related to rainfall in the growing season and the stored soil moisture at sowing time. The impact of stored soil moisture increased from southwest to northeast. Simulated annual deep drainage ranged from zero in the western inland to >200 mm in the east. Nitrogen management optimised based on 'perfect' knowledge of daily weather in the coming season could add value of 26-79/ha compared to management optimised based on historical climate, with the maximum occurring in the central to western part of the MDB. It would also reduce nitrogen application by 5-25 kgN/ha in the main cropping areas. Comparison of simulation results with the current land use mapping in the MDB revealed that the western boundary of the current cropping zone approximated the isolines of 160 mm of growing season rainfall, 2.5 t/ha of wheat grain yield, and 150/ha of gross margin in QLD and NSW. In VIC and SA, the 160-mm isohyets corresponded to relatively lower simulated yields due to less stored soil water. The impacts of other factors, such as soil type, are also discussed.

  15. The solution of target assignment problem in command and control decision-making behaviour simulation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Huai, Wenqing; Wang, Shaodan

    2017-08-01

    C2 (command and control) has been understood to be a critical military component to meet an increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework was proposed to specify a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Unlike most WTA problem descriptions, here sensors were considered to be available detection resources, and the relationship constraints between weapons and sensors were also taken into account, which brings the formulation much closer to actual application. A modified differential evolution (MDE) algorithm was developed to solve this high-dimension optimisation problem and obtained an optimal assignment plan with high efficiency. In a case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification. The new optimisation solution also solved the WTA problem efficiently and successfully.
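
    As background to the solver, the classic DE/rand/1/bin scheme that modified variants such as MDE build on can be sketched as follows; the soft weapon-target assignment objective below is an invented stand-in, not the authors' constrained formulation with sensor resources.

      # Minimal DE/rand/1/bin sketch; the toy objective relaxes the assignment
      # matrix to [0, 1] values and minimises expected target survival.
      import numpy as np

      def differential_evolution(objective, dim, pop_size=40, f=0.5, cr=0.9,
                                 generations=200, rng=None):
          rng = rng or np.random.default_rng(0)
          pop = rng.random((pop_size, dim))
          fit = np.array([objective(x) for x in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  a, b, c = pop[idx]
                  mutant = np.clip(a + f * (b - c), 0.0, 1.0)   # mutation
                  cross = rng.random(dim) < cr
                  cross[rng.integers(dim)] = True               # keep at least one gene
                  trial = np.where(cross, mutant, pop[i])       # binomial crossover
                  t_fit = objective(trial)
                  if t_fit < fit[i]:                            # greedy selection
                      pop[i], fit[i] = trial, t_fit
          return pop[fit.argmin()], fit.min()

      kill_prob = np.array([[0.6, 0.3], [0.2, 0.7]])            # weapon x target
      survival = lambda x: np.prod(1 - kill_prob * x.reshape(2, 2), axis=0).sum()
      plan, value = differential_evolution(survival, dim=4)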

  16. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned with the proposed framework, biologists can then perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
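
    As an illustration of the quantitative stage, a minimal simulated annealing loop that refines a single kinetic rate so that a toy one-species decay model reproduces target behaviour might look like this; the model, cost and cooling schedule are all invented for the sketch.

      # Minimal simulated annealing sketch for kinetic rate fitting; the
      # exponential-decay model stands in for a learned model structure.
      import math, random

      def simulate(k, x0=1.0, dt=0.1, steps=50):
          xs, x = [], x0
          for _ in range(steps):
              x += dt * (-k * x)          # explicit Euler step of dx/dt = -k x
              xs.append(x)
          return xs

      target = simulate(0.8)              # stand-in for observed behaviours

      def cost(k):
          return sum((a - b) ** 2 for a, b in zip(simulate(k), target))

      random.seed(1)
      k, temp = 2.0, 1.0
      for _ in range(2000):
          cand = abs(k + random.gauss(0.0, 0.1))
          delta = cost(cand) - cost(k)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              k = cand                    # Metropolis acceptance
          temp *= 0.995                   # geometric cooling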

  17. The seasonal behaviour of carbon fluxes in the Amazon: fusion of FLUXNET data and the ORCHIDEE model

    NASA Astrophysics Data System (ADS)

    Verbeeck, H.; Peylin, P.; Bacour, C.; Ciais, P.

    2009-04-01

    Eddy covariance measurements at the Santarém (km 67) site revealed an unexpected seasonal pattern in carbon fluxes which could not be simulated by existing state-of-the-art global ecosystem models (Saleska et al., Science, 2003). Unexpectedly high carbon uptake was measured during the dry season, whereas carbon release was observed in the wet season. There are several possible (combined) underlying mechanisms for this phenomenon: (1) increased soil respiration due to higher soil moisture in the wet season; (2) increased photosynthesis during the dry season due to deep rooting, hydraulic lift, increased radiation and/or a leaf flush. The objective of this study is to optimise the ORCHIDEE model using eddy covariance data so that it can mimic the seasonal response of carbon fluxes to dry/wet conditions in tropical forest ecosystems, and thereby to identify the underlying mechanisms of this seasonal response. ORCHIDEE is a state-of-the-art mechanistic global vegetation model that can be run at local or global scale. It calculates the carbon and water cycles in the different soil and vegetation pools and resolves the diurnal cycle of fluxes. ORCHIDEE is built on the concept of plant functional types (PFT) to describe vegetation. To bring the different carbon pool sizes to realistic values, spin-up runs are used. ORCHIDEE uses climate variables as drivers together with a number of ecosystem parameters that have been assessed from laboratory and in situ experiments. These parameters are still associated with large uncertainty and may vary between and within PFTs in a way that is currently not captured by the model. Recently developed assimilation techniques allow the objective use of eddy covariance data to improve our knowledge of these parameters in a statistically coherent approach. We use a Bayesian optimisation approach, based on the minimisation of a cost function containing the mismatch between simulated model output and observations as well as the mismatch between a priori and optimised parameters. The parameters can be optimised on different time scales (annually, monthly, daily). For this study the model is optimised at local scale for five eddy flux sites: four in Brazil and one in French Guiana. The seasonal behaviour of C fluxes in response to wet and dry conditions differs among these sites. Key processes that are optimised include the effect of soil water on heterotrophic soil respiration, the effect of soil water availability on stomatal conductance and photosynthesis, and phenology. By optimising several key parameters we could improve the simulation of the seasonal pattern of NEE significantly. Nevertheless, posterior parameters should be interpreted with care, because the resulting parameter values might compensate for uncertainties in the model structure or in other parameters. Moreover, several critical issues appeared during this study, e.g. how should latent and sensible heat data be assimilated when the energy balance is not closed in the data? Optimisation of the Q10 parameter showed that at some sites respiration was not sensitive at all to temperature, which shows only small variations in this region; considering this, one could question the reliability of the partitioned fluxes (GPP/Reco) at these sites. This study also tests whether there is coherence between optimised parameter values of different sites within the tropical forest PFT and whether the forward model response to climate variations is similar between sites.
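
    In its standard form (a generic statement of such a Bayesian cost function, not necessarily the exact weighting used in this study), the function being minimised is

      J(\mathbf{p}) = \tfrac{1}{2}\,\big(\mathbf{y} - M(\mathbf{p})\big)^{\mathsf{T}} \mathbf{R}^{-1} \big(\mathbf{y} - M(\mathbf{p})\big) + \tfrac{1}{2}\,\big(\mathbf{p} - \mathbf{p}_{b}\big)^{\mathsf{T}} \mathbf{B}^{-1} \big(\mathbf{p} - \mathbf{p}_{b}\big),

    where M(p) denotes the simulated fluxes for parameter vector p, y the eddy covariance observations, p_b the prior parameter values, and R and B the observation and prior error covariance matrices.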

  18. Simulation studies promote technological development of radiofrequency phased array hyperthermia.

    PubMed

    Wust, P; Seebass, M; Nadobny, J; Deuflhard, P; Mönich, G; Felix, R

    1996-01-01

    A treatment planning program package for radiofrequency hyperthermia has been developed. It consists of software modules for processing three-dimensional computerised tomography (CT) data sets, manual segmentation, generation of tetrahedral grids, numerical calculation and optimisation of three-dimensional E-field distributions using a volume-surface integral equation algorithm as well as temperature distributions using an adaptive multilevel finite-element code, and graphical tools for simultaneous representation of CT data and simulation results. Heat treatments are limited by hot spots in healthy tissues caused by E-field maxima at electrical interfaces (bone/muscle). In order to reduce or avoid hot spots, suitable objective functions are derived from power deposition patterns and temperature distributions, and are utilised to optimise antenna parameters (phases, amplitudes). The simulation and optimisation tools have been applied to estimate the improvements that could be reached by upgrades of the clinically used SIGMA-60 applicator (consisting of a single ring of four antenna pairs). The investigated upgrades are an increased number of antennas and channels (a triple ring of 3 × 8 antennas) and variation of antenna inclination. Significant improvement of index temperatures (1-2 degrees C) is achieved by upgrading the single ring to a triple ring with free phase selection for every antenna or antenna pair. Antenna amplitudes and inclinations proved to be less important parameters.

  19. Comparing approaches for using climate projections in assessing water resources investments for systems with multiple stakeholder groups

    NASA Astrophysics Data System (ADS)

    Hurford, Anthony; Harou, Julien

    2015-04-01

    Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data, and it is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms have been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections, to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary according to the different hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing trade-off curves and surfaces generated by the two approaches helps in understanding the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.

  20. Optimisation of the imaging and dosimetric characteristics of an electronic portal imaging device employing plastic scintillating fibres using Monte Carlo simulations.

    PubMed

    Blake, S J; McNamara, A L; Vial, P; Holloway, L; Kuncic, Z

    2014-11-21

    A Monte Carlo model of a novel electronic portal imaging device (EPID) has been developed using Geant4 and its performance for imaging and dosimetry applications in radiotherapy has been characterised. The EPID geometry is based on a physical prototype under ongoing investigation and comprises an array of plastic scintillating fibres in place of the metal plate/phosphor screen in standard EPIDs. Geometrical and optical transport parameters were varied to investigate their impact on imaging and dosimetry performance. Detection efficiency was most sensitive to variations in fibre length, achieving a peak value of 36% at 50 mm using 400 keV x-rays for the lengths considered. Increases in efficiency for longer fibres were partially offset by reductions in sensitivity. Removing the extra-mural absorber surrounding individual fibres severely decreased the modulation transfer function (MTF), highlighting its importance in maximising spatial resolution. Field size response and relative dose profile simulations demonstrated a water-equivalent dose response and thus the prototype's suitability for dosimetry applications. Element-to-element mismatch between scintillating fibres and underlying photodiode pixels resulted in a reduced MTF for high spatial frequencies and quasi-periodic variations in dose profile response. This effect is eliminated when fibres are precisely matched to underlying pixels. Simulations strongly suggest that with further optimisation, this prototype EPID may be capable of simultaneous imaging and dosimetry in radiotherapy.

  1. Accelerating clinical development of HIV vaccine strategies: methodological challenges and considerations in constructing an optimised multi-arm phase I/II trial design.

    PubMed

    Richert, Laura; Doussau, Adélaïde; Lelièvre, Jean-Daniel; Arnold, Vincent; Rieux, Véronique; Bouakane, Amel; Lévy, Yves; Chêne, Geneviève; Thiébaut, Rodolphe

    2014-02-26

    Many candidate vaccine strategies against human immunodeficiency virus (HIV) infection are under study, but their clinical development is lengthy and iterative. To accelerate HIV vaccine development, optimised trial designs are needed. We propose a randomised multi-arm phase I/II design for early stage development of several vaccine strategies, aiming at rapidly discarding those that are unsafe or non-immunogenic. We explored early stage designs to evaluate both the safety and the immunogenicity of four heterologous prime-boost HIV vaccine strategies in parallel. One of the vaccines used as a prime and boost in the different strategies (vaccine 1) has yet to be tested in humans, thus requiring a phase I safety evaluation. However, its toxicity risk is considered minimal based on data from similar vaccines. We newly adapted a randomised phase II trial by integrating an early safety decision rule, emulating that of a phase I study. We evaluated the operating characteristics of the proposed design in simulation studies with either a fixed-sample frequentist or a continuous Bayesian safety decision rule and projected timelines for the trial. We propose a randomised four-arm phase I/II design with two independent binary endpoints for safety and immunogenicity. Immunogenicity evaluation at trial end is based on a single-stage Fleming design per arm, comparing the observed proportion of responders in an immunogenicity screening assay to an unacceptably low proportion, without direct comparisons between arms. Randomisation limits heterogeneity in volunteer characteristics between arms. To avoid exposure of additional participants to an unsafe vaccine during the vaccine boost phase, an early safety decision rule is imposed on the arm starting with vaccine 1 injections. In simulations of the design with either decision rule, the risks of erroneous conclusions were controlled at <15%. Flexibility in trial conduct is greater with the continuous Bayesian rule. A 12-month gain in timelines is expected from this optimised design. Other existing designs, such as bivariate or seamless phase I/II designs, did not offer a clear-cut alternative. By combining phase I and phase II evaluations in a multi-arm trial, the proposed optimised design allows for accelerating the early stage clinical development of HIV vaccine strategies.

  2. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to speed up this process. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10(7) particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilised in the particle history analysis, which ranged from 7.5 × 10(7) to 20 × 10(7). In this study, with a 5 MeV electron cut-off and 10 × 10(7) particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce the MC electron beam calculation duration while preserving its accuracy.

  3. SIMS(DAIRY): a modelling framework to identify sustainable dairy farms in the UK. Framework description and test for organic systems and N fertiliser optimisation.

    PubMed

    Del Prado, A; Misselbrook, T; Chadwick, D; Hopkins, A; Dewhurst, R J; Davison, P; Butler, A; Schröder, J; Scholefield, D

    2011-09-01

    Multiple demands are placed on farming systems today. Society, national legislation and market forces seek what could be seen as conflicting outcomes from our agricultural systems, e.g. food quality, affordable prices, a healthy environment, consideration of animal welfare, biodiversity, etc. Many of these demands, or desirable outcomes, are interrelated, so reaching one goal may often compromise another and, importantly, pose a risk to the economic viability of the farm. SIMS(DAIRY), a farm-scale model, was used to explore this complexity for dairy farm systems. SIMS(DAIRY) integrates existing approaches to simulate the effect of interactions between farm management, climate and soil characteristics on losses of nitrogen, phosphorus and carbon. The effects on farm profitability and attributes of biodiversity, milk quality, soil quality and animal welfare are also included. SIMS(DAIRY) can also be used to optimise fertiliser N. In this paper we discuss some limitations and strengths of using SIMS(DAIRY) compared to other modelling approaches and propose some potential improvements. Using the model, we evaluated the sustainability of organic dairy systems compared with conventional dairy farms under non-optimised and optimised fertiliser N use. Model outputs showed, for example, that organic dairy systems based on grass-clover swards and maize silage resulted in much smaller total GHG emissions per l of milk and slightly smaller losses of NO(3) leaching and NO(x) emissions per l of milk compared with the grassland/maize-based conventional systems. These differences arose essentially because the conventional systems rely on indirect energy use for 'fixing' N, compared with biological N fixation in the organic systems. SIMS(DAIRY) runs also showed some other potential benefits of the organic systems compared with conventional systems in terms of financial performance and soil quality and biodiversity scores. Optimisation of fertiliser N timings and rates showed considerable scope to reduce GHG emissions per l of milk too. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Study on the extrusion of nickel-based spark plug electrodes by numerical simulation

    NASA Astrophysics Data System (ADS)

    Saby, Q.; Courbon, C.; Salvatore, F.; Fabre, D.; Romeyer, F.

    2018-05-01

    Interest in metal forming simulation has grown rapidly during the last decades and the approach is now well established, even in industry. It provides a flexible and relatively cheap method to perform sensitivity analyses, gain better insight into the forming process and use it as an optimisation tool. As far as wear is concerned, numerical simulation can be seen as a relevant approach to assess the thermomechanical loadings applied to the active die surface and therefore to predict the dies' wear behaviour. In this study, a Finite-Element (FE) based model has been developed in order to investigate the cold forming process of a nickel-based spark plug electrode. A fully thermo-mechanically coupled implicit formulation has been used to model the forward extrusion step, with a special emphasis on the contact conditions at the workpiece-die interface. Contact pressure, relative sliding velocity and temperature profiles have been extracted versus time and qualitatively compared to the wear phenomena observed on the worn production dies.

  5. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which determines the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients between optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
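
    The composite objective can be written down compactly; the following sketch (illustrative variable names, equal weights as stated) shows one way to combine the three terms into a single score to maximise.

      # Composite calibration objective: NSE on flows, NSE on log-flows and a
      # volumetric error term with approximately equal weights.
      import numpy as np

      def nse(sim, obs):
          return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def composite_objective(sim, obs, eps=1e-6):
          nse_q = nse(sim, obs)                                # emphasises high flows
          nse_log = nse(np.log(sim + eps), np.log(obs + eps))  # emphasises low flows
          vol_err = abs(np.sum(sim - obs)) / np.sum(obs)       # volumetric error
          return (nse_q + nse_log + (1 - vol_err)) / 3         # maximise this score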

  6. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimised solution of the approach problem is generated using the Gauss pseudospectral method, and a closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  7. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy for multiplier-less low-pass finite impulse response (FIR) filters with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter are encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy, and for this purpose the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples with different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
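
    To make the encoding concrete, the sketch below expands a real-valued coefficient into a short sum of signed powers-of-two with a greedy residual method; the paper instead evolves such encodings with the genetic algorithm, so this is only an illustration of the representation.

      # Greedy signed powers-of-two expansion of a filter coefficient, so that
      # multiplication by the coefficient reduces to shifts and adds.
      import math

      def to_signed_powers_of_two(x, terms=4, min_exp=-8):
          parts, residual = [], x
          for _ in range(terms):
              if residual == 0:
                  break
              exp = max(min_exp, round(math.log2(abs(residual))))  # nearest power of two
              sign = 1 if residual > 0 else -1
              parts.append((sign, exp))
              residual -= sign * 2 ** exp
          return parts, residual          # [(sign, exponent), ...] and leftover error

      parts, err = to_signed_powers_of_two(0.5861)
      # parts == [(1, -1), (1, -4), (1, -5), (-1, -7)]
      # i.e. 0.5 + 2**-4 + 2**-5 - 2**-7 = 0.5859375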

  8. A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges

    PubMed Central

    Asgari, B.; Osman, S. A.; Adnan, A.

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces smaller bending moments and stresses in the bridge members and requires a shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400

  9. A new multiconstraint method for determining the optimal cable stresses in cable-stayed bridges.

    PubMed

    Asgari, B; Osman, S A; Adnan, A

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces smaller bending moments and stresses in the bridge members and requires a shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method.

  10. An improved PSO-SVM model for online recognition defects in eddy current testing

    NASA Astrophysics Data System (ADS)

    Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin

    2013-12-01

    Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that can contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the primary PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole model and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm can achieve higher recognition accuracy and efficiency than other well-known classifiers, and its superiority is more obvious with smaller training sets, which contributes to online application.

  11. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the ability of the fruit fly optimisation algorithm (FOA) to search for the optimal solution, in order to avoid becoming trapped in local extrema. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discusses three common evolutionary computation methods and compares them with the modified fruit fly optimisation algorithm (MFOA). It further investigates the algorithms' ability to compute extreme values of three mathematical functions, their execution speed, and the forecast ability of forecasting models built using the optimised general regression neural network (GRNN) parameters. The findings indicate that there is no obvious difference between particle swarm optimisation and the MFOA in the ability to compute extreme values; however, both are better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performs better than particle swarm optimisation in terms of algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters is better than that of the other three forecasting models.

  12. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output in this study. The significant parameters used are melt temperature, mould temperature, packing pressure and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) is applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. The optimised warpage value can thus be obtained from the model designed using RSM, owing to its minimal error. The study shows that the warpage value is improved by using RSM.

  13. Optimisation of warpage on thin shell part by using particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Norshahira, R.; Shayfull, Z.; Nasir, S. M.; Saad, S. M. Sazli; Fathullah, M.

    2017-09-01

    As products nowadays move towards thinner designs, the production of plastic parts faces many difficulties, because the possibility of defects increases as the wall thickness decreases; demand for techniques to reduce these defects is growing accordingly. These defects have been seen to occur due to several factors in the injection moulding process. In this study, Moldflow software was used to simulate the injection moulding process, while RSM was used to produce the mathematical model serving as the input fitness function for the Matlab software. The particle swarm optimisation (PSO) technique was used to optimise the processing conditions in order to reduce the shrinkage and warpage of the plastic part. The results show warpage reductions of 17.60% in the x direction, 18.15% in the y direction and 10.25% in the z direction, demonstrating the reliability of this artificial intelligence method in minimising product warpage.
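
    For illustration, a bare-bones PSO loop of the kind used here can be sketched as follows; the quadratic "warpage" surface stands in for the RSM-fitted model, whose real coefficients the abstract does not give.

      # Minimal PSO sketch minimising a toy RSM-style warpage surface; the
      # variables are scaled process parameters in [0, 1].
      import numpy as np

      rng = np.random.default_rng(0)

      def warpage(x):                     # invented quadratic stand-in for the RSM model
          return 0.4 + np.sum((x - np.array([0.6, 0.3, 0.7, 0.5])) ** 2)

      n, dim = 30, 4                      # swarm size; melt T, mould T, packing P, cooling t
      pos = rng.random((n, dim))
      vel = np.zeros((n, dim))
      pbest = pos.copy()
      pbest_val = np.array([warpage(p) for p in pos])
      gbest = pbest[pbest_val.argmin()]

      for it in range(200):
          w = 0.9 - 0.5 * it / 200        # linearly decaying inertia weight
          r1, r2 = rng.random((2, n, dim))
          vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0.0, 1.0)
          vals = np.array([warpage(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[pbest_val.argmin()]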

  14. Natural Erosion of Sandstone as Shape Optimisation.

    PubMed

    Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan

    2017-12-11

    Natural arches, pillars and other exotic sandstone formations have always attracted attention for their unusual shapes and amazing mechanical balance that leave a strong impression of intelligent design rather than of the result of a stochastic process. It has recently been demonstrated that these shapes could have been the result of the negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches utilised in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and the Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of the erosion using the topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and feasible landscape formations that could be found on Earth and beyond.

  15. Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.

    PubMed

    Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha

    2015-01-01

    The amount of inhomogeneity in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), correlates with disease advancement. A quantitative analysis method, the CVT method, measuring these inhomogeneities was proposed in earlier work. To detect mild COPD, which is a difficult task, optimised parameter values are needed. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction and analysis. The ordered subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq (99m)Tc, was obtained using a low-energy high-resolution (LEHR) collimator and a power-6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm(-1). Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed; for the reduced lung, a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq (99m)Tc, together with an optimal combination of cutoff frequency, number of updates and kernel size, gave the best result. Suboptimal selection of the cutoff frequency, number of updates or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.

  16. A universal preconditioner for simulating condensed phase materials.

    PubMed

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
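
    The open-source implementation referred to lives in ASE's ase.optimize.precon module; a minimal usage sketch, assuming a recent ASE release and using the built-in EMT toy potential as a stand-in calculator, looks like this:

      # Preconditioned geometry optimisation with ASE's precon module.
      from ase.build import bulk
      from ase.calculators.emt import EMT
      from ase.optimize.precon import Exp, PreconLBFGS

      atoms = bulk("Cu", cubic=True) * (3, 3, 3)   # 108-atom copper supercell
      atoms.rattle(stdev=0.05, seed=1)             # perturb away from the minimum
      atoms.calc = EMT()

      # Exp builds the sparse, neighbourhood-based preconditioner described above
      opt = PreconLBFGS(atoms, precon=Exp(A=3.0))
      opt.run(fmax=1e-3)                           # converge the maximum force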

  17. A universal preconditioner for simulating condensed phase materials

    NASA Astrophysics Data System (ADS)

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-01

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  18. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.

  19. A domain specific language for performance portable molecular dynamics algorithms

    NASA Astrophysics Data System (ADS)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.

  20. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of the linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between simulations and measurements. Here we design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discuss the obtained differences, point out the importance of de-embedding measured parameters and of appropriate modelling of discrete components, and give specific recipes for good modelling practice.

  1. End-to-end System Performance Simulation: A Data-Centric Approach

    NASA Astrophysics Data System (ADS)

    Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier

    2013-08-01

    In the early days of the space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite: it was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amounts of data processed by spacecraft have increased drastically, placing more and more constraints on ground segment performance, in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, and sometimes even to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites in order to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for “End-to-end Timeliness Optimisation of Space systems”), provides a modelling process with associated tools, models and GUIs. These are integrated thanks to a common data model and suitable adapters, with the aim of building simulators of the full end-to-end chain for space systems. A big challenge of such an environment is to integrate heterogeneous tools (each one being well adapted to part of the chain) into a relevant timeliness simulation.

  2. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming) and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
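
    The mechanism the article describes, random variates whose spread is gradually constricted as the search converges, can be illustrated in a few lines; the beam weights and quadratic dose objective below are an invented toy, not a clinical model.

      # Toy iterative stochastic search over beam weights with a constricting
      # proposal distribution.
      import numpy as np

      rng = np.random.default_rng(0)
      dose_matrix = rng.random((5, 3))        # dose to 5 points from 3 beams
      prescribed = np.full(5, 1.0)            # desired dose at each point

      def objective(w):
          return np.sum((dose_matrix @ w - prescribed) ** 2)

      weights = np.ones(3)
      spread = 0.5                            # initial width of the random variates
      for _ in range(5000):
          trial = np.clip(weights + rng.normal(0.0, spread, 3), 0.0, None)
          if objective(trial) < objective(weights):
              weights = trial
          spread *= 0.999                     # constrict the search as it converges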

  3. Optimisation of process parameters on thin shell part using response surface methodology (RSM) and genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    This study simulates the optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. The process parameters applied are melt temperature, mould temperature, packing pressure and cooling time, and they are used to analyse the warpage value of the part. The part selected for study is made of polypropylene (PP). The combinations of process parameters are analysed using Analysis of Variance (ANOVA), and the optimised values are obtained using Response Surface Methodology (RSM). RSM and a genetic algorithm (GA) are applied in Design Expert software in order to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.

  4. A new web-based modelling tool (Websim-MILQ) aimed at optimisation of thermal treatments in the dairy industry.

    PubMed

    Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P

    2008-11-30

    In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (WebSim-MILQ) was developed for the optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes, but also to reduce the time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied to the optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to the risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for the optimisation of a cheese milk pasteurisation process, where we increased the cheese yield (one extra cheese for every 100 cheeses produced from the same amount of milk) and reduced the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.

  5. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  6. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica content and superplasticiser (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gives the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
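
    The overall desirability used in such multiobjective optimisation is conventionally the geometric mean of the individual desirabilities (the standard Derringer-Suich form; the abstract does not give the exact transforms used here):

      D = \Big( \prod_{i=1}^{n} d_i \Big)^{1/n}, \qquad 0 \le d_i \le 1,

    where each d_i maps the i-th response (compressive strength, porosity or slump flow) onto [0, 1] and D = 0.811 is the reported optimum.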

  7. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Electricity generation based on hybrid energy systems (HESs) has nowadays become a more attractive solution for rural electrification. Economically feasible and technically reliable HESs rest on a sound optimisation stage. This article discusses an optimal unit-sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and a sensitivity analysis of the optimal HES are discussed in detail in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewables (HOMER) for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy than the solution given by the existing method.

  8. Power generation based on biomass by combined fermentation and gasification--a new concept derived from experiments and modelling.

    PubMed

    Methling, Torsten; Armbrust, Nina; Haitz, Thilo; Speidel, Michael; Poboss, Norman; Braun-Unkhoff, Marina; Dieter, Heiko; Kempter-Regel, Brigitte; Kraaij, Gerard; Schliessmann, Ursula; Sterr, Yasemin; Wörner, Antje; Hirth, Thomas; Riedel, Uwe; Scheffknecht, Günter

    2014-10-01

    A new concept is proposed for the combined fermentation (two-stage high-load fermenter) and gasification (two-stage fluidised bed gasifier with CO2 separation) of sewage sludge and wood, and the subsequent utilisation of the biogenic gases in a hybrid power plant consisting of a solid oxide fuel cell and a gas turbine. The development and optimisation of the important processes of the new concept (fermentation, gasification, utilisation) are reported in detail. For the gas production, process parameters were experimentally and numerically investigated to achieve high conversion rates of biomass. For the product gas utilisation, important combustion properties (laminar flame speed, ignition delay time) were analysed numerically to evaluate machinery operation (reliability, emissions). Furthermore, the coupling of the processes was numerically analysed and optimised by means of integration of heat and mass flows. The high simulated electrical efficiency of 42%, including the conversion of raw biomass, is promising for future power generation from biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.

  10. Optimised analytical models of the dielectric properties of biological tissue.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin

    2017-05-01

    The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
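
    As a rough illustration of the kind of fit described above, the sketch below uses scipy's differential_evolution, a genetic-style global optimiser standing in for the authors' two-stage genetic algorithm, to recover multi-pole Debye parameters from synthetic permittivity data; all bounds, values and the synthetic "measurement" are illustrative assumptions, not the paper's.

```python
# Sketch: fitting a three-pole Debye model to measured complex permittivity
# with a genetic-style optimiser. differential_evolution stands in for the
# authors' two-stage genetic algorithm; the "measured" data are synthetic
# and all parameter bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def debye(params, omega, n_poles=3):
    """Multi-pole Debye model: eps_inf + sum_k d_eps_k/(1 + j w tau_k) + sigma/(j w eps0)."""
    eps_inf, sigma = params[0], params[1]
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for k in range(n_poles):
        d_eps, tau = params[2 + 2 * k], params[3 + 2 * k]
        eps += d_eps / (1.0 + 1j * omega * tau)
    return eps + sigma / (1j * omega * EPS0)

def misfit(params, omega, measured):
    # Fit real and imaginary parts jointly.
    return np.sum(np.abs(debye(params, omega) - measured) ** 2)

# Synthetic "measurement" over 500 MHz - 20 GHz (the range quoted in the abstract).
omega = 2 * np.pi * np.logspace(np.log10(0.5e9), np.log10(20e9), 200)
true = np.array([4.0, 0.7, 50.0, 8e-12, 20.0, 80e-12, 5.0, 1e-9])
measured = debye(true, omega)

bounds = [(1, 10), (0, 2)] + [(0, 100), (1e-13, 1e-8)] * 3
result = differential_evolution(misfit, bounds, args=(omega, measured), seed=1, tol=1e-10)
print("fitted parameters:", result.x)
```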

  11. A universal preconditioner for simulating condensed phase materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
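
    A minimal sketch of the idea of a neighbourhood-based sparse preconditioner follows: a graph-Laplacian-like matrix assembled from an atomic neighbour list, in the spirit of the paper. The exponential coefficient form, cutoffs and constants are assumptions, not the published parameterisation.

```python
# Sketch of a neighbourhood-based sparse preconditioner: off-diagonal entries
# couple neighbouring atoms, row sums sit on the diagonal, and the matrix is
# replicated over the x, y, z coordinates. All constants are assumptions.
import numpy as np
import scipy.sparse as sp

def neighbourhood_preconditioner(positions, r_cut=3.0, r_nn=1.0, A=3.0, mu=1.0):
    """Assemble P with off-diagonals -mu*exp(-A(r_ij/r_nn - 1)) for neighbour pairs."""
    n = len(positions)
    rows, cols, vals = [], [], []
    diag = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                c = mu * np.exp(-A * (r / r_nn - 1.0))
                rows += [i, j]; cols += [j, i]; vals += [-c, -c]
                diag[i] += c; diag[j] += c
    rows += list(range(n)); cols += list(range(n))
    vals += list(diag + 0.1 * mu)  # small diagonal shift keeps P positive definite
    P1 = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return sp.kron(P1, sp.identity(3), format="csr")  # act on all 3N coordinates

# Usage: P replaces the identity in preconditioned descent, i.e. steps are
# obtained by solving P d = -grad E instead of taking d = -grad E directly.
positions = np.random.default_rng(0).random((100, 3)) * 10.0
P = neighbourhood_preconditioner(positions)
print(P.shape, P.nnz)
```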

  12. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered: optimal fluid mixing, in the form of vorticity maximisation, is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, giving the desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework that shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Search, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and to produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  13. Cultural-based particle swarm for dynamic optimisation problems

    NASA Astrophysics Data System (ADS)

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

    Many practical optimisation problems involve uncertainties, and a significant number of these belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment, assists the response to change through diversity-based repulsion among particles and migration among swarms in the population space, and also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates performance better than or equal to that of most of the other selected state-of-the-art dynamic PSO heuristics.

  14. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set-membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise with a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained using data recorded in several flight scenarios of a highly representative aircraft benchmark.

  15. Design and simulation of a semiconductor chip-based visible - NIR spectrometer for Earth observation

    NASA Astrophysics Data System (ADS)

    Coote, J.; Woolliams, E.; Fox, N.; Goodyer, I. D.; Sweeney, S. J.

    2014-03-01

    We present the development of a novel semiconductor chip-based spectrometer for calibration of Earth observation instruments. The chip follows the Solo spectroscopy approach utilising an array of microdisk resonators evanescently coupled to a central waveguide. Each resonator is tuned to select out a specific wavelength from the incoming spectrum, and forms a p-i-n junction in which current is generated when light of the correct wavelength is present. In this paper we discuss important design aspects including the choice of semiconductor material, design of semiconductor quantum well structures for optical absorption, and design and optimisation of the waveguide and resonators.

  16. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  17. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  18. Circuit-level optimisation of a:Si TFT-based AMOLED pixel circuits for maximum hold current

    NASA Astrophysics Data System (ADS)

    Foroughi, Aidin; Mehrpoo, Mohammadreza; Ashtiani, Shahin J.

    2013-11-01

    Design of AMOLED pixel circuits involves manifold constraints and trade-offs, which provides an incentive for circuit designers to seek optimal solutions for different objectives. In this article, we present a discussion on the viability of an optimal solution to achieve the maximum hold current. A compact formula for component sizing in a conventional 2T1C pixel is, therefore, derived. Compared to SPICE simulation results for several pixel sizes, our predicted optimum sizing yields maximum currents with errors of less than 0.4%.

  19. 3D printing process of oxidized nanocellulose and gelatin scaffold.

    PubMed

    Xu, Xiaodong; Zhou, Jiping; Jiang, Yani; Zhang, Qi; Shi, Hongcan; Liu, Dongfang

    2018-08-01

    For tissue engineering applications, tissue scaffolds need a porous structure to meet the needs of cell proliferation/differentiation and vascularisation, as well as sufficient mechanical strength for the specific tissue. Here we report the results of a study of the 3D printing process for composite materials based on oxidized nanocellulose and gelatin. The process was optimised by measuring the rheological properties of different batches of materials after different crosslinking times, simulating the pneumatic extrusion process and 3D scaffold fabrication with SolidWorks Flow Simulation, observing the porous structure by SEM, measuring pressure-pull performance, and performing experiments to determine in vitro cytotoxicity and cell morphology. The printed materials are highly porous scaffolds with good mechanical properties.

  20. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of optimisation methods to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A Design of Experiments (DOE) for Response Surface Methodology (RSM) was constructed, and Particle Swarm Optimisation (PSO) was then applied using the equation obtained from the RSM. The optimisation results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the factors most significantly affecting warpage, as stated by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The parameter most significantly affecting warpage is packing pressure.
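
    The two-step workflow described above, fitting a quadratic response surface to DOE results and then minimising the fitted surface with PSO, can be sketched as below. The factors, ranges and warpage values are illustrative placeholders, not the study's data.

```python
# Sketch of the RSM + PSO workflow: fit a quadratic response surface to DOE
# results, then minimise the fitted surface with a simple particle swarm.
import numpy as np

rng = np.random.default_rng(0)

def features(X):
    """Quadratic RSM basis for two factors: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# DOE points: (melt temperature [C], packing pressure [MPa]) with measured warpage [mm].
X_doe = np.array([[200, 60], [200, 90], [240, 60], [240, 90], [220, 75],
                  [220, 60], [220, 90], [200, 75], [240, 75]], float)
y_doe = np.array([0.62, 0.48, 0.55, 0.51, 0.46, 0.53, 0.47, 0.52, 0.50])

beta, *_ = np.linalg.lstsq(features(X_doe), y_doe, rcond=None)
warpage = lambda X: features(np.atleast_2d(X)) @ beta

# Minimal global-best PSO over the factor ranges.
lo, hi = np.array([200.0, 60.0]), np.array([240.0, 90.0])
x = rng.uniform(lo, hi, (30, 2)); v = np.zeros_like(x)
pbest, pbest_f = x.copy(), warpage(x)
for _ in range(200):
    gbest = pbest[np.argmin(pbest_f)]
    v = (0.7 * v + 1.5 * rng.random((30, 2)) * (pbest - x)
               + 1.5 * rng.random((30, 2)) * (gbest - x))
    x = np.clip(x + v, lo, hi)
    f = warpage(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
print("optimised parameters:", pbest[np.argmin(pbest_f)], "warpage:", pbest_f.min())
```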

  1. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVPs. The scheme involves a generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. an error evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.
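
    A minimal sketch of the weighted-residual idea on one singular BVP follows, using differential evolution as the metaheuristic; the specific test problem, trial basis and optimiser settings are my assumptions, not the paper's benchmarks.

```python
# Minimal sketch of the weighted-residual approach on one singular BVP:
#   y'' + (2/x) y' = -1,  y'(0) = 0,  y(1) = 0,
# whose exact solution is y = (1 - x^2)/6. Trial functions phi_k = 1 - x^(2k)
# satisfy both boundary conditions, so the optimiser only has to drive the
# interior residual to zero. Basis choice and settings are assumptions.
import numpy as np
from scipy.optimize import differential_evolution

K = 4                                   # number of trial functions
x = np.linspace(0.05, 0.95, 60)         # collocation points away from x = 0

def residual_norm(c):
    # For y = sum_k c_k (1 - x^(2k)):  y'' + (2/x) y' = -sum_k c_k 2k(2k+1) x^(2k-2),
    # so the residual is R = 1 - sum_k c_k 2k(2k+1) x^(2k-2).
    r = np.ones_like(x)
    for k in range(1, K + 1):
        r -= c[k - 1] * 2 * k * (2 * k + 1) * x ** (2 * k - 2)
    return np.sum(r ** 2)               # weighted residual function (unit weights)

result = differential_evolution(residual_norm, [(-1, 1)] * K, seed=2, tol=1e-12)
print("coefficients:", np.round(result.x, 4))   # expect ~[1/6, 0, 0, 0]
```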

  2. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064

  3. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp³)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  4. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load, according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of the candidate patients, n=4,556) would be optimisation to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  5. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and the computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction than the GA solution, at only a quarter of the computational resources used by the smallest GA configuration specified. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances under which the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
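
    The greedy IO idea can be sketched as follows: at each step, add the candidate station whose observation most reduces the trace of the Bayesian posterior flux covariance. The flux grid, sensitivity matrix and covariances below are synthetic placeholders, not the South African network data.

```python
# Sketch of the incremental optimisation (IO) routine: greedily add the
# candidate station whose observation most reduces the trace of the posterior
# flux covariance A = (H' R^-1 H + B^-1)^-1. All inputs are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_flux, n_cand = 40, 25
H_all = rng.random((n_cand, n_flux))      # sensitivity of each candidate obs to fluxes
B = np.eye(n_flux)                        # prior flux covariance
r = 0.1                                   # observation error variance (uncorrelated)

def posterior_trace(idx):
    if not idx:
        return np.trace(B)
    H = H_all[idx]
    A = np.linalg.inv(H.T @ (H / r) + np.linalg.inv(B))
    return np.trace(A)

network = []
for _ in range(5):                        # build a five-member network
    scores = {j: posterior_trace(network + [j])
              for j in range(n_cand) if j not in network}
    best = min(scores, key=scores.get)    # station giving the lowest posterior trace
    network.append(best)
    print(f"added station {best}, posterior trace = {scores[best]:.3f}")
```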

  6. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  7. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference vectors (reference directions) for the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then further incorporated into the preference vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
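
    The decomposition step can be illustrated as below: an interval-valued weight preference is sampled into evenly spaced weight vectors, and each vector defines one scalarised sub-problem via the penalty boundary intersection (PBI) function commonly used with MOEA/D. The interval, the toy objectives and the penalty value are my assumptions, not the paper's formulation.

```python
# Sketch: decomposing an interval-valued weight preference into evenly spaced
# weight vectors and scalarising each sub-problem with the PBI function.
import numpy as np

# Decision-maker preference on the first objective's weight: w1 in [0.3, 0.6].
n_vec = 8
w1 = np.linspace(0.3, 0.6, n_vec)
lambdas = np.column_stack([w1, 1.0 - w1])          # bi-objective weight vectors

def pbi(F, lam, z, theta=5.0):
    """Penalty boundary intersection scalarisation of objective vector F."""
    lam_n = lam / np.linalg.norm(lam)
    d1 = np.dot(F - z, lam_n)                      # distance along the direction
    d2 = np.linalg.norm(F - z - d1 * lam_n)        # distance off the direction
    return d1 + theta * d2

# Toy bi-objective function and ideal point, for illustration only.
f = lambda x: np.array([x[0], 1.0 - np.sqrt(x[0]) + x[1] ** 2])
z = np.zeros(2)

# Each weight vector defines one single-objective sub-problem; here we simply
# evaluate one candidate solution against all sub-problems.
x = np.array([0.25, 0.1])
print([round(pbi(f(x), lam, z), 3) for lam in lambdas])
```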

  8. On the use of PGD for optimal control applied to automated fibre placement

    NASA Astrophysics Data System (ADS)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an emerging manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the necessity of melting the thermoplastic at the tape-substrate interface, ensuring the consolidation that requires the diffusion of molecules, and controlling the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since this requires a plethora of trials/errors or numerical simulations and there are many parameters involved in the characterisation of the material and the process. Using reduced order modelling, such as the so-called Proper Generalised Decomposition (PGD) method, allows the construction of a multi-parametric solution taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularising a virtual chart, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and optimality criteria. To circumvent numerical issues due to ill-conditioned systems, we propose an algorithm based on Uzawa's method. In this way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the heat flux required to reach a parametric optimal temperature on a given zone.

  9. Wind energy resource modelling in Portugal and its future large-scale alteration due to anthropogenic induced climate changes =

    NASA Astrophysics Data System (ADS)

    Carvalho, David Joao da Silva

    Portugal's high dependence on foreign energy sources (mainly fossil fuels), together with the international commitments assumed by Portugal and the national strategy in terms of energy policy, as well as resource sustainability and climate change issues, inevitably forces Portugal to invest in its energetic self-sufficiency. The European Union's 20/20/20 Strategy stipulates that in 2020, 60% of the total electricity consumption must come from renewable energy sources. Wind energy is currently a major source of electricity generation in Portugal, producing about 23% of the national total electricity consumption in 2013. The National Energy Strategy 2020 (ENE2020), which aims to ensure national compliance with the European 20/20/20 Strategy, states that about half of this 60% target will be provided by wind energy. This work aims to implement and optimise a numerical weather prediction model for the simulation and modelling of the wind energy resource in Portugal, in both offshore and onshore areas. The numerical model optimisation consisted of determining which initial and boundary conditions and which planetary boundary layer physical parameterisations provide wind power flux (or energy density), wind speed and direction simulations closest to in situ measured wind data. Specifically for offshore areas, it is also intended to evaluate whether the numerical model, once optimised, is able to produce power flux, wind speed and direction simulations more consistent with in situ measured data than wind measurements collected by satellites. This work also aims to study and analyse possible impacts that anthropogenic climate changes may have on the future wind energy resource in Europe. The results show that the ECMWF ERA-Interim reanalysis is the forcing database, among all those currently available to drive numerical weather prediction models, that allows wind power flux, wind speed and direction simulations most consistent with in situ wind measurements. It was also found that the Pleim-Xiu and ACM2 planetary boundary layer parameterisations showed the best performance in terms of wind power flux, wind speed and direction simulations. This model optimisation allowed a significant reduction of the errors in the wind power flux, wind speed and direction simulations and, specifically for offshore areas, produced simulations more consistent with in situ wind measurements than data obtained from satellites, which is a very valuable achievement. This work also revealed that future anthropogenic climate changes can negatively impact the future European wind energy resource, due to tendencies towards a reduction in future wind speeds, especially by the end of the current century and under stronger radiative forcing conditions.

  10. Transformer ratio saturation in a beam-driven wakefield accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J. P.; Martorelli, R.; Pukhov, A.

    We show that for beam-driven wakefield acceleration, the linearly ramped, equally spaced train of bunches typically considered to optimise the transformer ratio only works for flat-top bunches. Through theory and simulation, we explain that this behaviour is due to the unique properties of the plasma response to a flat-top density profile. Calculations of the optimal scaling for a train of Gaussian bunches show diminishing returns with increasing bunch number, tending towards saturation. For a periodic bunch train, a transformer ratio of 23 was achieved for 50 bunches, rising to 40 for a fully optimised beam.

  11. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    PubMed

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our objective was to develop methods to optimise, design and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Monte carlo study of MOSFET packaging, optimised for improved energy response: single MOSFET filtration.

    PubMed

    Othman, M A R; Cutajar, D L; Hardcastle, N; Guatelli, S; Rosenfeld, A B

    2010-09-01

    Monte Carlo simulations of the energy response of a conventionally packaged single metal-oxide-semiconductor field-effect transistor (MOSFET) detector were performed with the goal of improving the MOSFET energy dependence for personal accident or military dosimetry. The MOSFET detector packaging was optimised. Two different 'drop-in' design packages for a single MOSFET detector were modelled and optimised using the GEANT4 Monte Carlo toolkit. Simulations of the absorbed photon dose for the MOSFET dosemeter placed in free air were performed, corresponding to the absorbed doses at depths of 0.07 mm (Dw(0.07)) and 10 mm (Dw(10)) in a water-equivalent phantom of size 30 × 30 × 30 cm³, for photon energies of 0.015-2 MeV. Energy dependence was reduced to within ±60% for photon energies of 0.06-2 MeV for both Dw(0.07) and Dw(10). Variations in the response for photon energies of 15-60 keV were 200 and 330% for Dw(0.07) and Dw(10), respectively. The obtained energy dependence was reduced compared with that of conventionally packaged MOSFET detectors, which usually exhibit a 500-700% over-response when used in free-air geometry.

  13. Miniature microwave applicator for murine bladder hyperthermia studies.

    PubMed

    Salahi, Sara; Maccarini, Paolo F; Rodrigues, Dario B; Etienne, Wiguins; Landon, Chelsea D; Inman, Brant A; Dewhirst, Mark W; Stauffer, Paul R

    2012-01-01

    Novel combinations of heat with chemotherapeutic agents are often studied in murine tumour models. Currently, no device exists to selectively heat small tumours at depth in mice. In this project we modelled, built and tested a miniature microwave heat applicator, the physical dimensions of which can be scaled to adjust the volume and depth of heating to focus on the tumour volume. Of particular interest is a device that can selectively heat murine bladder. Using Avizo® segmentation software, we created a numerical mouse model based on micro-MRI scan data. The model was imported into HFSS™ (Ansys) simulation software and parametric studies were performed to optimise the dimensions of a water-loaded circular waveguide for selective power deposition inside a 0.15 mL bladder. A working prototype was constructed operating at 2.45 GHz. Heating performance was characterised by mapping fibre-optic temperature sensors along catheters inserted at depths of 0-1 mm (subcutaneous), 2-3 mm (vaginal), and 4-5 mm (rectal) below the abdominal wall, with the mid-depth catheter adjacent to the bladder. Core temperature was monitored orally. Thermal measurements confirm the simulations which demonstrate that this applicator can provide local heating at depth in small animals. Measured temperatures in murine pelvis show well-localised bladder heating to 42-43°C while maintaining normothermic skin and core temperatures. Simulation techniques facilitate the design optimisation of microwave antennas for use in pre-clinical applications such as localised tumour heating in small animals. Laboratory measurements demonstrate the effectiveness of a new miniature water-coupled microwave applicator for localised heating of murine bladder.

  14. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America

  15. Dynamic least-cost optimisation of wastewater system remedial works requirements.

    PubMed

    Vojinovic, Z; Solomatine, D; Price, R K

    2006-01-01

    In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill the gap, several research projects have explored various algorithms to optimise remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Some of the major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, the incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper is oriented towards resolving these issues and discusses a new approach for the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimal solution is performed by a global optimisation tool (with various random search algorithms) while the system performance is simulated by a hydrodynamic pipe network model. The work on assembling all required elements and developing appropriate interface protocols between the two tools, which decode the potential remedial solutions into the pipe network model and calculate the corresponding scenario costs, is currently underway.

  16. On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish

    2016-04-01

    A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulation of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km² in size, located in the Murray-Darling Basin in Australia, are used. Our results illustrate that when single-objective optimisation focused on maximising the objective function for streamflow or LAI, the other, un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimisation cannot take into account the essence of all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to the better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.

  17. Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon

    2018-02-01

    Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are the most relevant for the AET patterns simulated by the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, comprised of three easily interpretable components measuring the co-location, variation and distribution of the spatial data. The study shows that with the flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on observed AET patterns and the novel spatial performance metric was included, compared with traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
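
    A three-component pattern metric in the spirit of the one described above might be sketched as follows; the specific proxies chosen here (Pearson correlation for co-location, coefficient-of-variation agreement for variation, histogram overlap of z-scores for distribution) and the equal weighting are my assumptions, not the published metric.

```python
# Sketch of a bias-insensitive, three-component spatial pattern metric for
# comparing simulated and observed AET maps. Proxies and weights are assumed.
import numpy as np

def pattern_score(sim, obs, bins=20):
    sim, obs = sim.ravel(), obs.ravel()
    # 1) co-location: Pearson correlation of the two maps (bias-insensitive).
    co_loc = np.corrcoef(sim, obs)[0, 1]
    # 2) variation: agreement of relative spatial variability.
    cv = lambda a: a.std() / a.mean()
    variation = 1.0 - abs(cv(sim) - cv(obs)) / cv(obs)
    # 3) distribution: overlap of the normalised histograms of z-scores.
    zs = lambda a: (a - a.mean()) / a.std()
    lo = min(zs(sim).min(), zs(obs).min()); hi = max(zs(sim).max(), zs(obs).max())
    h_sim, _ = np.histogram(zs(sim), bins=bins, range=(lo, hi), density=True)
    h_obs, edges = np.histogram(zs(obs), bins=bins, range=(lo, hi), density=True)
    overlap = np.sum(np.minimum(h_sim, h_obs) * np.diff(edges))
    return (co_loc + variation + overlap) / 3.0   # equally weighted components

rng = np.random.default_rng(4)
obs = rng.gamma(4.0, 0.5, (50, 50))               # stand-in satellite AET pattern
sim = obs + rng.normal(0, 0.3, obs.shape)         # stand-in model AET pattern
print(f"pattern score: {pattern_score(sim, obs):.3f}")   # 1 = perfect match
```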

  18. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

    A method is presented by Witold Waldman and Manfred … for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate.

  19. Optimisation of synergistic biomass-degrading enzyme systems for efficient rice straw hydrolysis using an experimental mixture design.

    PubMed

    Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat

    2012-09-01

    A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was used to optimise the ternary enzyme complex based on the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6 (%), which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic enzyme mixture of fungal enzymes and expansin for lignocellulosic degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
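
    The optimisation step of such a mixture design can be sketched as maximising a Scheffé full cubic model over the ternary simplex (proportions summing to one); the coefficient values below are placeholders, not the fitted model from the paper.

```python
# Sketch: maximising a Scheffe full cubic mixture model over the ternary
# simplex, as in an experimental mixture design. Coefficients are placeholders.
import numpy as np
from scipy.optimize import minimize

b = dict(b1=600, b2=550, b3=200, b12=400, b13=150, b23=100,
         d12=-80, d13=60, d23=-40, b123=900)

def reducing_sugar(x):
    """Predicted yield for proportions x = (Celluclast, BCC 199, expansin)."""
    x1, x2, x3 = x
    return (b["b1"]*x1 + b["b2"]*x2 + b["b3"]*x3
            + b["b12"]*x1*x2 + b["b13"]*x1*x3 + b["b23"]*x2*x3
            + b["d12"]*x1*x2*(x1-x2) + b["d13"]*x1*x3*(x1-x3)
            + b["d23"]*x2*x3*(x2-x3) + b["b123"]*x1*x2*x3)

# Maximise yield subject to x >= 0 and sum(x) = 1 (the mixture constraint).
res = minimize(lambda x: -reducing_sugar(x), x0=[1/3, 1/3, 1/3],
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda x: np.sum(x) - 1.0},
               method="SLSQP")
print("optimal blend:", np.round(res.x, 3), "predicted yield:", -res.fun)
```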

  20. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

    Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In the H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and, in line with previous studies, the optimal α parameter in the SCA was approximately 0.31. Importantly, a poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with the deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.

  1. Fractionation of wastewater characteristics for modelling of Firle Sewage Treatment Works, Harare, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Muserere, Simon Takawira; Hoko, Zvikomborero; Nhapi, Innocent

    Varying conditions are required for the different species of microorganisms involved in the complex biological processes taking place within the activated sludge treatment system. It is against the requirement to manage this complex dynamic system that computer simulators were developed to aid in optimising activated sludge treatment processes. These computer simulators require calibration with quality input data, which include wastewater fractionation among other things. Thus, this research fractionated raw sewage at Firle Sewage Treatment Works (STW) for calibration of the BioWin simulation model. Firle STW is a 3-stage activated sludge system. Wastewater characteristics of importance for activated sludge process design can be grouped into carbonaceous, nitrogenous and phosphorus compounds. Division of the substrates and compounds into their constituent fractions is called fractionation and is a valuable tool for process assessment. Fractionation can be carried out using bioassay methods or much simpler physico-chemical methods. The bioassay methods require considerable experience with experimental activated sludge systems and the associated measurement techniques, while the physico-chemical methods are straightforward. Plant raw wastewater fractionation was carried out through two 14-day campaign periods, the first from 3 to 16 July 2013 and the second from 1 to 14 October 2013. According to the Zimbabwean Environmental Management Act, and based on the sensitivity of its catchment, the Firle STW effluent discharge regulatory standards in mg/L are COD (<60), TN (<10), ammonia (<0.2) and TP (<1). On the other hand, the Firle STW Unit 4 effluent quality results in mg/L, based on City of Harare records during the period of study, were COD (90 ± 35), TN (9.0 ± 3.0), ammonia (0.2 ± 0.4) and TP (3.0 ± 1.0). The raw sewage concentrations measured during the study, in mg/L with the corresponding fractions in brackets, were: total COD 680 ± 37; slowly biodegradable COD 456 ± 23 (0.70); readily biodegradable COD 131 ± 11 (0.20); soluble unbiodegradable COD 40 ± 3 (0.06); particulate unbiodegradable COD 53 ± 3 (0.08); total TKN 40 ± 4; ammonia 28 ± 6 (0.68); organically bound nitrogen 12 ± 2 (0.32); TP 15 ± 1.4; orthophosphates 9.6 ± 1.4 (0.64); organically bound TP 5.4 ± 1.4 (0.36); soluble unbiodegradable TP 0.4 ± 0 (0.03); particulate unbiodegradable TP 0.05 ± 0 (0.003). The wastewater at Firle STW was thus found to be highly biodegradable, suggesting that optimisation of the biological nutrient removal process will generally achieve compliance with the effluent regulatory standards. Opportunities for plant optimisation therefore exist, and modelling with the use of a simulator is recommended to achieve the recommended effluent standards in addition to reducing operating costs.

  2. Process Simulation of Aluminium Sheet Metal Deep Drawing at Elevated Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winklhofer, Johannes; Trattnig, Gernot; Lind, Christoph

    Lightweight design is essential for an economic and environmentally friendly vehicle. Aluminium sheet metal is well known for its ability to improve the strength to weight ratio of lightweight structures. One disadvantage of aluminium is that it is less formable than steel. Therefore complex part geometries can only be realised by expensive multi-step production processes. One method for overcoming this disadvantage is deep drawing at elevated temperatures. In this way the formability of aluminium sheet metal can be improved significantly, and the number of necessary production steps can thereby be reduced. This paper introduces deep drawing of aluminium sheet metal at elevated temperatures, a corresponding simulation method, a characteristic process and its optimisation. The temperature and strain rate dependent material properties of a 5xxx series alloy and their modelling are discussed. A three-dimensional thermomechanically coupled finite element deep drawing simulation model and its validation are presented. Based on the validated simulation model, an optimised process strategy regarding formability, time and cost is introduced.

  3. Escalated convergent artificial bee colony

    NASA Astrophysics Data System (ADS)

    Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu

    2016-03-01

    Artificial bee colony (ABC) optimisation is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved to rival some popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration, at the cost of exploitation. In order to obtain fast convergent behaviour of ABC while maintaining its exploitation capability, in this paper the basic ABC is modified in two ways. First, to improve exploitation capability, two local search strategies, namely a classical unidimensional local search and a Lévy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give abandoned solutions more chances to improve. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. The results are very promising and prove it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
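
    For orientation, the sketch below shows the standard ABC solution search equation v_ij = x_ij + φ(x_ij − x_kj) combined with a classical unidimensional local search around the best food source, the first of the two modifications named above. The test function, control parameters and step sizes are illustrative; the Lévy-flight variant and the stochastic diffusion scout phase are not reproduced.

```python
# Sketch: basic ABC search equation plus a unidimensional local search around
# the best food source. Onlooker phase omitted for brevity; all settings assumed.
import numpy as np

rng = np.random.default_rng(5)
sphere = lambda x: np.sum(x ** 2)                 # benchmark objective
dim, n_food, limit = 10, 20, 30
X = rng.uniform(-5, 5, (n_food, dim))
fit = np.array([sphere(x) for x in X])
trials = np.zeros(n_food, int)

for it in range(300):
    for i in range(n_food):                       # employed-bee phase
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        if sphere(v) < fit[i]:
            X[i], fit[i], trials[i] = v, sphere(v), 0
        else:
            trials[i] += 1
    best = np.argmin(fit)
    for j in range(dim):                          # unidimensional local search:
        for step in (0.1, -0.1):                  # refine best source one axis at a time
            v = X[best].copy(); v[j] += step
            if sphere(v) < fit[best]:
                X[best], fit[best] = v, sphere(v)
    scout = trials > limit                        # scout phase: abandon stale sources
    X[scout] = rng.uniform(-5, 5, (scout.sum(), dim))
    fit[scout] = [sphere(x) for x in X[scout]]
    trials[scout] = 0
print("best value:", fit.min())
```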

  4. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    NASA Astrophysics Data System (ADS)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study is reported of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching the quasi-steady state agree reasonably well with the experimental observations. Also, the variation of the spatial soot distribution and of the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low and high density conditions, is reproduced.

  5. Explicit reference governor for linear systems

    NASA Astrophysics Data System (ADS)

    Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo

    2018-06-01

    The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored to the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set that is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve performance, we show that the two strategies can be combined by choosing, at each time instant, the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performance comparable to that of optimisation-based reference governors.
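
    The Lyapunov-inequality variant can be sketched as follows for a toy LTI system: solve a continuous Lyapunov equation, compute the largest Lyapunov level set that satisfies each linear constraint, and let the applied reference move toward the desired one only as fast as the remaining margin allows. The system matrices, gains and constraint below are illustrative assumptions, not the paper's examples.

```python
# Sketch of a Lyapunov-based explicit reference governor for x' = A_cl x + B_cl v
# with linear constraints C x <= b: the applied reference v approaches the desired
# reference r at a rate proportional to the remaining invariant-set margin.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A_cl = np.array([[0.0, 1.0], [-2.0, -1.5]])       # stable closed-loop dynamics
B_cl = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]]); b = np.array([1.0])   # constraint: position <= 1
P = solve_continuous_lyapunov(A_cl.T, -np.eye(2)) # A_cl' P + P A_cl = -I
Pinv = np.linalg.inv(P)

def steady_state(v):
    return np.linalg.solve(-A_cl, B_cl @ v)       # equilibrium x_v for constant v

def threshold(v):
    """Largest level set value Gamma such that {V <= Gamma} satisfies all constraints."""
    xv = steady_state(v)
    margins = (b - C @ xv) ** 2 / np.einsum("ij,jk,ik->i", C, Pinv, C)
    return margins.min()

dt, kappa = 0.01, 5.0
x = np.zeros(2); v = np.array([0.0]); r = np.array([0.95])
for _ in range(2000):
    e = x - steady_state(v)
    delta = max(threshold(v) - e @ P @ e, 0.0)    # remaining "safe" margin
    v = v + dt * kappa * delta * np.sign(r - v)   # navigate v toward r
    x = x + dt * (A_cl @ x + B_cl @ v)            # simulate the plant
print("final position:", x[0], "applied reference:", v[0])
```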

  6. Challenges in simulating the human gut for understanding the role of the microbiota in obesity.

    PubMed

    Aguirre, M; Venema, K

    2017-02-07

    There is an elevated incidence of obesity worldwide. Therefore, the development of strategies to tackle this condition is of vital importance. This review focuses on the necessity of optimising in vitro systems to model human colonic fermentation in obese subjects. This may make it possible to increase the resolution and the physiological relevance of the information obtained from this type of study when evaluating the potential role that the human gut microbiota plays in obesity. In light of the parameters that are currently used for the in vitro simulation of the human gut (which are mostly based on information derived from healthy subjects) and their possible differences under an obese condition, we propose to revise and improve specific standard operating procedures.

  7. Analysis and design of high-power and efficient, millimeter-wave power amplifier systems using zero degree combiners

    NASA Astrophysics Data System (ADS)

    Tai, Wei; Abbasi, Mortez; Ricketts, David S.

    2018-01-01

    We present the analysis and design of high-power millimetre-wave power amplifier (PA) systems using zero-degree combiners (ZDCs). The methodology presented optimises the PA device sizing and the number of combined unit PAs based on device load-pull simulations, driver power consumption analysis and loss analysis of the ZDC. Our analysis shows that an optimal number of N-way combined unit PAs leads to the highest power-added efficiency (PAE) for a given output power. To illustrate the design methodology, we designed a 1-W PA system at 45 GHz using a 45 nm silicon-on-insulator process and showed that an 8-way combined PA has the highest PAE, yielding a simulated output power of 30.6 dBm and 31% peak PAE.
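    The trade-off behind the optimal N can be sketched in a back-of-envelope way: more combining stages mean smaller (and, in this toy model, more efficient) unit PAs, but more combiner loss and more driver overhead. All numbers below are hypothetical placeholders, not the paper's data; with these particular values the optimum happens to fall at 8-way, echoing the abstract, but the location of the optimum is entirely driven by the assumed loss and device-efficiency curves.

    ```python
    from math import log2

    P_TARGET_W = 1.0            # desired system output power
    LOSS_DB_PER_STAGE = 0.35    # assumed ZDC loss per binary combining stage
    DRIVER_W_PER_UNIT = 0.02    # assumed fixed driver overhead per unit PA

    def system_efficiency(n):
        loss = 10 ** (-LOSS_DB_PER_STAGE * log2(n) / 10)
        p_unit = P_TARGET_W / (n * loss)            # power each unit must make
        pae_unit = max(0.05, 0.45 - 0.6 * p_unit)   # toy: big devices do worse
        p_dc = n * p_unit / pae_unit + DRIVER_W_PER_UNIT * n
        return P_TARGET_W / p_dc                    # crude efficiency proxy

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d}-way combined: system efficiency ~ {system_efficiency(n):.1%}")
    ```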

  8. Control allocation-based adaptive control for greenhouse climate

    NASA Astrophysics Data System (ADS)

    Su, Yuanping; Xu, Lihong; Goodman, Erik D.

    2018-04-01

    This paper presents an adaptive approach to greenhouse climate control, as part of an integrated control and management system for greenhouse production. In this approach, an adaptive control algorithm is first derived to guarantee the asymptotic convergence of the closed-loop system under uncertainty; using that algorithm, a controller is designed to satisfy the demands for heat and mass fluxes needed to maintain inside temperature, humidity and CO2 concentration at their desired values. Second, instead of applying the original adaptive control inputs directly, a control allocation technique is applied to distribute the demanded heat and mass fluxes to the actuators by minimising tracking errors and energy consumption. To find an energy-saving solution, both single-objective optimisation (SOO) and multiobjective optimisation (MOO) in the control allocation structure are considered. The advantage of the proposed approach is that it does not require any a priori knowledge of the uncertainty bounds, and the simulation results illustrate the effectiveness of the proposed control scheme. They also indicate that MOO saves more energy in the control process.
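    As a much simplified illustration of the allocation step in its single-objective form, the sketch below distributes demanded fluxes across redundant actuators by regularised weighted least squares. The matrices, weights and the quadratic form of the energy term are our assumptions, not the paper's model.

    ```python
    import numpy as np

    # Distribute a commanded flux vector v among redundant actuators u by
    # minimising ||B u - v||^2 (tracking) + lam * ||W u||^2 (energy).

    def allocate(B, v, W, lam):
        # Normal equations of the regularised least-squares problem.
        H = B.T @ B + lam * W.T @ W
        return np.linalg.solve(H, B.T @ v)

    B = np.array([[1.0, 0.8, 0.0],    # heat-flux contribution of 3 actuators
                  [0.0, 0.3, 1.0]])   # vapour-flux contribution
    W = np.diag([1.0, 0.2, 1.5])      # relative energy cost per actuator
    v = np.array([2.0, -0.5])         # demanded heat / mass fluxes

    u = allocate(B, v, W, lam=0.1)
    print("actuator commands:", u, "achieved fluxes:", B @ u)
    ```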

  9. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
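    For readers unfamiliar with CC, the sketch below builds the co-association (agreement) matrix that such methods operate on. It is a generic illustration with made-up base partitions, not the MOCC algorithm itself.

    ```python
    import numpy as np

    # Co-association matrix: entry (i, j) is the fraction of base clusterings
    # that place items i and j in the same cluster.

    partitions = [
        [0, 0, 1, 1, 2],   # labels from clustering algorithm A
        [0, 0, 0, 1, 1],   # labels from clustering algorithm B
        [1, 1, 2, 2, 0],   # labels from clustering algorithm C
    ]

    labels = np.array(partitions)            # shape: (n_runs, n_items)
    n = labels.shape[1]
    agreement = np.zeros((n, n))
    for run in labels:
        agreement += (run[:, None] == run[None, :])
    agreement /= len(labels)
    print(agreement)
    # A consensus partition can then be obtained by clustering this matrix,
    # e.g. thresholding it at an (optimised) agreement separation level.
    ```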

  10. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
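    Schematically, the kind of evolutionary loop that BluePyOpt wraps and standardises looks like the sketch below. This is deliberately not the BluePyOpt API (which builds on tools such as DEAP); the model, parameter names and fitness function are toy stand-ins for a neuron simulation scored against experimental features.

    ```python
    import random

    # A plain (mu + lambda) evolution strategy fitting two toy parameters.

    random.seed(1)
    BOUNDS = {"g_na": (0.0, 1.0), "g_k": (0.0, 1.0)}   # parameter ranges
    TARGET = {"g_na": 0.12, "g_k": 0.036}              # "experimental" optimum

    def fitness(ind):
        # Stand-in for running a simulation and scoring it against data;
        # lower is better.
        return sum((ind[k] - TARGET[k]) ** 2 for k in ind)

    def mutate(ind, sigma=0.05):
        child = {}
        for k, (lo, hi) in BOUNDS.items():
            child[k] = min(hi, max(lo, ind[k] + random.gauss(0, sigma)))
        return child

    pop = [{k: random.uniform(*b) for k, b in BOUNDS.items()} for _ in range(20)]
    for gen in range(50):
        offspring = [mutate(random.choice(pop)) for _ in range(20)]
        pop = sorted(pop + offspring, key=fitness)[:20]   # (mu + lambda) select
    print(pop[0], fitness(pop[0]))
    ```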

  11. Simulation as a preoperative planning approach in advanced heart failure patients. A retrospective clinical analysis.

    PubMed

    Capoccia, Massimo; Marconi, Silvia; Singh, Sanjeet Avtaar; Pisanelli, Domenico M; De Lazzari, Claudio

    2018-05-02

    Modelling and simulation may become clinically applicable tools for detailed evaluation of the cardiovascular system and clinical decision-making to guide therapeutic intervention. Models based on the pressure-volume relationship and zero-dimensional representation of the cardiovascular system may be a suitable choice given their simplicity and versatility. This approach has great potential for application in heart failure, where left ventricular assist devices have played a significant role as a bridge to transplant and, more recently, as a long-term solution for non-eligible candidates. We sought to investigate the value of simulation in the context of three heart failure patients with a view to predicting or guiding further management. CARDIOSIM© was the software used for this purpose. The study was based on retrospective analysis of haemodynamic data previously discussed at a multidisciplinary meeting. The outcome of the simulations addressed the value of a more quantitative approach in the clinical decision process. Although previous experience, co-morbidities and the risk of potentially fatal complications play a role in clinical decision-making, patient-specific modelling may become a daily approach for selection and optimisation of device-based treatment for heart failure patients. Willingness to adopt this integrated approach may be the key to further progress.

  12. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905

  13. Achieving optimal SERS through enhanced experimental design.

    PubMed

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
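    To illustrate the contrast drawn above, the sketch below enumerates a small full-factorial design over hypothetical SERS factors. The factor names, levels and response function are invented and serve only to show why a multivariate screen can capture interaction-driven optima that one-factor-at-a-time searches can mishandle.

    ```python
    from itertools import product

    # A full 3 x 3 x 2 factorial screen over made-up SERS factors, each
    # combination scored by a placeholder response standing in for the
    # measured SERS signal.

    factors = {
        "aggregating_salt_mM": [5, 10, 20],
        "colloid_ph": [5.0, 7.0, 9.0],
        "analyte_uM": [1, 10],
    }

    def response(salt, ph, analyte):
        # Placeholder including a salt/pH interaction term of the kind that
        # one-factor-at-a-time optimisation can mishandle.
        return -(salt - 10) ** 2 - 5 * (ph - 7) ** 2 - 2 * salt * abs(ph - 7) + analyte

    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    best = max(runs, key=lambda r: response(r["aggregating_salt_mM"],
                                            r["colloid_ph"], r["analyte_uM"]))
    print(len(runs), "runs; best conditions:", best)
    ```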

  14. Emergency Management Operations Process Mapping: Public Safety Technical Program Study

    DTIC Science & Technology

    2011-02-01

    Enterprise Architectures in industry, and have been successfully applied to assist companies to optimise interdependencies and relationships between...model for more in-depth analysis of EM processes, and for use in tandem with other studies that apply modeling and simulation to assess EM operational effectiveness before and after changing elements

  15. Optimising probe holder design for sentinel lymph node imaging using clinical photoacoustic system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit

    2017-03-01

    Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging. It is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system. A clinical ultrasound imaging system is modified to integrate photoacoustic imaging with the ultrasound imaging. Hence, light delivery has been integrated with the ultrasound probe. The angle of light delivery is optimized in this work with respect to the depth of imaging. Optimization was performed based on Monte Carlo simulation for light transport in tissues. Based on the simulation results, the probe holders were fabricated using 3D printing. Similar results were obtained experimentally using phantoms. Phantoms were developed to mimic sentinel lymph node imaging scenario. Also, in vivo sentinel lymph node imaging was done using the same system with contrast agent methylene blue up to a depth of 1.5 cm. The results validate that one can use Monte Carlo simulation as a tool to optimize the probe holder design depending on the imaging needs. This eliminates a trial and error approach generally used for designing a probe holder.
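    The sketch below shows the basic mechanics of such a Monte Carlo light-transport calculation: exponentially distributed step lengths, probabilistic absorption and Henyey-Greenstein scattering, with the fraction of photons reaching a target depth tallied per launch angle. It uses generic tissue-like optical properties and a deliberately simplified direction update (sampled about the z-axis rather than the current direction), so it illustrates the method only, not the study's simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    MU_S, MU_A, G = 10.0, 0.1, 0.9   # scattering / absorption (1/mm), anisotropy
    MU_T = MU_S + MU_A

    def frac_reaching(angle_deg, target_mm=5.0, n_photons=1000, max_steps=400):
        a = np.radians(angle_deg)
        hits = 0
        for _ in range(n_photons):
            pos = np.zeros(3)
            direc = np.array([np.sin(a), 0.0, np.cos(a)])   # launch direction
            for _ in range(max_steps):
                pos = pos + direc * rng.exponential(1.0 / MU_T)
                if pos[2] >= target_mm:
                    hits += 1
                    break
                if rng.random() < MU_A / MU_T:
                    break                                   # photon absorbed
                s = rng.random()                            # HG polar angle
                cos_t = (1 + G * G -
                         ((1 - G * G) / (1 - G + 2 * G * s)) ** 2) / (2 * G)
                sin_t = np.sqrt(max(0.0, 1 - cos_t ** 2))
                phi = rng.uniform(0, 2 * np.pi)
                direc = np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t])
        return hits / n_photons

    for ang in (0, 15, 30):
        print(f"launch angle {ang:2d} deg:", frac_reaching(ang))
    ```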

  16. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed at various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is about 10⁴ times shorter than that of the full GEANT4 simulation.

  17. Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veselov, D A; Ayusheva, K R; Shashkin, I S

    2015-10-31

    We have studied the effect of laser cavity parameters on the light–current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400 – 1600 nm. It has been shown that optimisation of cavity parameters (chip length and front facet reflectivity) allows one to improve heat removal from the laser, without changing other laser characteristics. An increase in the maximum output optical power of the laser by 0.5 W has been demonstrated due to cavity design optimisation. (lasers)

  18. A water market simulator considering pair-wise trades between agents

    NASA Astrophysics Data System (ADS)

    Huskova, I.; Erfani, T.; Harou, J. J.

    2012-04-01

    In many basins in England no further water abstraction licences are available. Trading water between water rights holders has been recognised as a potentially effective and economically efficient strategy to mitigate increasing scarcity. A screening tool that could assess the potential for trade through realistic simulation of individual water rights holders would help establish the solution's potential contribution to local water management. We propose an optimisation-driven water market simulator that predicts pair-wise trade in a catchment and represents its interaction with natural hydrology and engineered infrastructure. A model is used to emulate licence holders' willingness to engage in short-term trade transactions. In their simplest form, agents are represented using an economic benefit function. The working hypothesis is that trading behaviour can be partially predicted based on differences in marginal values of water over space and time and on estimates of the transaction costs of pair-wise trades. We discuss the further possibility of embedding rules, norms and preferences of the different water user sectors to more realistically represent the behaviours, motives and constraints of individual licence holders. The potential benefits and limitations of such a social simulation (agent-based) approach are contrasted with our simulator, in which agents are driven by economic optimisation. A case study based on the Dove River Basin (UK) demonstrates model inputs and outputs. The ability of the model to suggest impacts of water rights policy reforms on trading is discussed.
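    The working hypothesis can be sketched directly: agents trade a unit of water whenever the gap in their marginal values exceeds the transaction cost. The agents, value curves and costs below are invented for illustration.

    ```python
    # Pair-wise trading loop: reallocate one unit at a time to the highest
    # marginal-value gap until no trade covers its transaction cost.

    agents = {                      # holding (Ml) and marginal value per Ml
        "farm_A":    {"water": 10.0, "mv": lambda w: 60.0 - 4.0 * w},
        "farm_B":    {"water": 2.0,  "mv": lambda w: 90.0 - 5.0 * w},
        "utility_C": {"water": 6.0,  "mv": lambda w: 70.0 - 2.0 * w},
    }
    TRANSACTION_COST = 5.0          # per pair-wise trade of one unit
    UNIT = 1.0

    def marginal(name):
        a = agents[name]
        return a["mv"](a["water"])

    trades = []
    while True:
        names = list(agents)
        # Most valuable pair-wise reallocation of one unit.
        pairs = [(marginal(b) - marginal(s), s, b)
                 for s in names for b in names
                 if s != b and agents[s]["water"] >= UNIT]
        gain, seller, buyer = max(pairs)
        if gain <= TRANSACTION_COST:
            break                   # no remaining trade covers its cost
        agents[seller]["water"] -= UNIT
        agents[buyer]["water"] += UNIT
        trades.append((seller, buyer, round(gain, 1)))

    print(trades)
    print({k: v["water"] for k, v in agents.items()})
    ```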

  19. Design and experimental analysis of a new malleovestibulopexy prosthesis using a finite element model of the human middle ear.

    PubMed

    Vallejo Valdezate, Luis A; Hidalgo Otamendi, Antonio; Hernández, Alberto; Lobo, Fernando; Gil-Carcedo Sañudo, Elisa; Gil-Carcedo García, Luis M

    2015-01-01

    Many designs of prostheses are available for middle ear surgery. In this study we propose a design for a new prosthesis, which optimises mechanical performance in the human middle ear and improves some deficiencies in the prostheses currently available. Our objective was to design and assess the theoretical acoustic-mechanical behaviour of this new total ossicular replacement prosthesis. The design of this new prosthesis was based on an animal model (an iguana). For the modelling and mechanical analysis of the new prosthesis, we used a dynamic 3D computer model of the human middle ear, based on the finite elements method (FEM). The new malleovestibulopexy prosthesis design demonstrates an acoustical-mechanical performance similar to that of the healthy human middle ear. This new design also has additional advantages, such as ease of implantation and stability in the middle ear. This study shows that computer simulation can be used to design and optimise the vibroacoustic characteristics of middle ear implants and demonstrates the effectiveness of a new malleovestibulopexy prosthesis in reconstructing the ossicular chain. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  20. Optimum design and operation of primary sludge fermentation schemes for volatile fatty acids production.

    PubMed

    Chanona, J; Ribes, J; Seco, A; Ferrer, J

    2006-01-01

    This paper presents a model-knowledge-based algorithm for optimising the design and operation of the primary sludge fermentation process. This is a recently adopted method to obtain the volatile fatty acids (VFA) needed to improve biological nutrient removal processes directly from the raw wastewater. The proposed algorithm is a heuristic reasoning algorithm based on expert knowledge of the process. Only the effluent VFA and the sludge blanket height (SBH) have to be set as design criteria, and the optimisation algorithm obtains the minimum return sludge and waste sludge flow rates which fulfil those design criteria. A pilot plant fed with municipal raw wastewater was operated in order to obtain experimental results supporting the developed algorithm's groundwork. The experimental results indicate that when the SBH was increased, a higher solids retention time was obtained in the settler and VFA production increased. Higher recirculation flow rates resulted in higher VFA production too. Finally, the developed algorithm has been tested by simulating different design conditions, with very good results. It was able to find the optimal operating conditions in all cases in which the preset design conditions could be achieved. Furthermore, this is a general algorithm that can be applied to any fermentation-elutriation scheme with or without a fermentation reactor.

  1. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process proceeds from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. It comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system is therefore a reactive activity that only to a limited extent engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the first three stages of the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient lies within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, of which managing radiation dose is only one. This emphasises the need for a holistic approach that integrates the optimisation process into different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Better powder diffractometers. II—Optimal choice of U, V and W

    NASA Astrophysics Data System (ADS)

    Cussen, L. D.

    2007-12-01

    This article presents a technique for optimising constant wavelength (CW) neutron powder diffractometers (NPDs) using conventional nonlinear least squares methods. This is believed to be the first such design optimisation for a neutron spectrometer. The validity of this approach and discussion should extend beyond the Gaussian element approximation used and also to instruments using different radiation, such as X-rays. This approach could later be extended to include vertical and perhaps horizontal focusing monochromators and probably other types of instruments such as three-axis spectrometers. It is hoped that this approach will help in comparisons of CW and time-of-flight (TOF) instruments. Recent work showed that many different beam element combinations can give identical resolution on CW NPDs and presented a procedure to find these combinations and also an "optimum" choice of detector collimation. Those results enable the previous redundancy in the description of instrument performance to be removed and permit a least squares optimisation of design. New inputs are needed and are identified as the sample plane spacings (dS) of interest in the measurement. The optimisation requires a "quality factor", QPD, chosen here as minimising the worst Bragg peak separation ability over some measurement range (dS) while maintaining intensity. Any other QPD desired could be substituted. It is argued that high-resolution and high-intensity powder diffractometers (HRPDs and HIPDs) should have similar designs adjusted by a single scaling factor. Simulated comparisons are described suggesting significant improvements in performance for CW HIPDs. Optimisation with unchanged wavelength suggests improvements by factors of about 2 for HRPDs and 25 for HIPDs. A recently quantified design trade-off between the maximum line intensity possible and the degree of variation of angular resolution over the scattering angle range leads to efficiency gains at short wavelengths. This in turn leads in practice to another trade-off between this efficiency gain and losses at short wavelength due to technical effects. The exact gains from varying wavelength depend on the details of the short wavelength technical losses. Simulations suggest that the total potential PD performance gains may be very significant: factors of about 3 for HRPDs and more than 90 for HIPDs.

  3. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

    Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with the results reported in previously published works using other algorithms, which shows the efficiency of FA in stacking sequence optimisation of laminated composite structures.
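    For reference, the core FA move is shown below on a generic continuous test problem, with typical textbook parameter values; the paper's discrete ply-angle encoding for laminates is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def firefly(f, dim=5, n=15, iters=200, lo=-5, hi=5,
                beta0=1.0, gamma=1.0, alpha=0.2):
        x = rng.uniform(lo, hi, (n, dim))
        light = np.array([f(v) for v in x])          # lower = brighter here
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if light[j] < light[i]:          # move i towards brighter j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) \
                                + alpha * (rng.random(dim) - 0.5)
                        x[i] = np.clip(x[i], lo, hi)
                        light[i] = f(x[i])
            alpha *= 0.98                            # cool the random walk
        b = np.argmin(light)
        return x[b], light[b]

    xb, fb = firefly(lambda v: float(np.sum(v ** 2)))
    print(xb, fb)
    ```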

  4. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  5. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.
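    Schematically, and purely as our illustration of the boundedness issue the paper addresses (notation ours), a cost of the form

    $$J=\sum_{k=0}^{\infty}\left(e_k^{\top}Q_e\,e_k + x_k^{\top}Q_x\,x_k + u_k^{\top}R\,u_k\right)$$

    diverges whenever the steady state \((x_{ss}, u_{ss})\) achieving zero tracking error is nonzero, since each term then approaches the constant \(x_{ss}^{\top}Q_x\,x_{ss} + u_{ss}^{\top}R\,u_{ss}\). Subtracting this constant per-step cost from every term yields an equivalent bounded criterion, which is the role of the correction term described above.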

  6. Path optimisation of a mobile robot using an artificial neural network controller

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Parhi, D. R.

    2011-01-01

    This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to the robot's position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which involves cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results, which are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis have been carried out in a real experimental setup.
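    A sketch of this controller structure is given below: a four-layer feed-forward network mapping (left, right, front obstacle distance, target angle) to a steering angle, trained by back-propagation. The layer sizes, training data and the imitated steering rule are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    class SteeringNet:
        def __init__(self, sizes=(4, 8, 6, 1)):     # four layers, as above
            self.W = [rng.normal(0, 0.5, (a, b))
                      for a, b in zip(sizes[:-1], sizes[1:])]
            self.b = [np.zeros(b) for b in sizes[1:]]

        def forward(self, x):
            self.acts = [x]
            for W, b in zip(self.W, self.b):
                x = np.tanh(x @ W + b)
                self.acts.append(x)
            return x

        def backprop(self, target, lr=0.05):
            # Gradient of squared error through the tanh layers.
            delta = (self.acts[-1] - target) * (1 - self.acts[-1] ** 2)
            for i in reversed(range(len(self.W))):
                self.W[i] -= lr * np.outer(self.acts[i], delta)
                self.b[i] -= lr * delta
                if i:
                    delta = (self.W[i] @ delta) * (1 - self.acts[i] ** 2)

    net = SteeringNet()
    # Toy rule to imitate: steer away from the nearer side obstacle, biased
    # towards the target angle (all quantities normalised to [-1, 1]).
    X = rng.uniform(-1, 1, (500, 4))                 # left, right, front, target
    y = np.tanh(0.6 * X[:, 3] + 0.4 * (X[:, 0] - X[:, 1]))[:, None]
    for epoch in range(100):
        for x, t in zip(X, y):
            net.forward(x)
            net.backprop(t)
    print(net.forward(np.array([0.2, 0.9, 0.8, 0.1])))  # predicted steering
    ```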

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz, 8 GByte RAM processor remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.

  8. SU-F-T-184: 3D Range-Modulator for Scanned Particle Therapy: Development, Monte Carlo Simulations and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simeonov, Y; Penchev, P; Ringbaek, T Printz

    2016-06-15

    Purpose: Active raster scanning in particle therapy results in highly conformal dose distributions. Treatment time, however, is relatively long due to the large number of different iso-energy layers used. By using only one energy and the so-called 3D range-modulator, irradiation times of only a few seconds can be achieved, thus making the delivery of a homogeneous dose to moving targets (e.g. lung cancer) more reliable. Methods: A 3D range-modulator consisting of many pins with a base area of 2.25 mm² and different lengths was developed and manufactured with a rapid prototyping technique. The form of the 3D range-modulator was optimised for a spherical target volume with 5 cm diameter placed at 25 cm in a water phantom. Monte Carlo simulations using the FLUKA package were carried out to evaluate the modulating effect of the 3D range-modulator and simulate the resulting dose distribution. The fine and complicated contour form of the 3D range-modulator was taken into account by a specially programmed user routine. Additionally, FLUKA was extended with the capability of intensity-modulated scanning. To verify the simulation results, dose measurements were carried out at the Heidelberg Ion Therapy Center (HIT) with a 400.41 MeV ¹²C beam. Results: The high-resolution measurements show that the 3D range-modulator is capable of producing homogeneous 3D conformal dose distributions, while significantly reducing irradiation time. The measured dose is in very good agreement with the previously conducted FLUKA simulations, where slight differences were traced back to minor manufacturing deviations from the perfect optimised form. Conclusion: Combined with the advantage of very short treatment times, the 3D range-modulator could be an alternative for treating small to medium-sized tumours (e.g. lung metastases) with the same conformity as full raster-scanning treatment. Further simulations and measurements of more complex cases will be conducted to investigate the full potential of the 3D range-modulator.

  9. New Trends in Forging Technologies

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Hagen, T.; Knigge, J.; Elgaly, I.; Hadifi, T.; Bouguecha, A.

    2011-05-01

    Limited natural resources increase the demand for highly efficient machinery and transportation means. New energy-saving mobility concepts call for design optimisation through downsizing of components and the choice of corrosion-resistant materials possessing high strength-to-density ratios. Component downsizing can be performed either by constructive structural optimisation or by substituting heavy materials with lighter high-strength ones. In this context, forging plays an important role in manufacturing load-optimised structural components. At the Institute of Metal Forming and Metal-Forming Machines (IFUM) various innovative forging technologies have been developed. With regard to structural optimisation, different strategies for localised reinforcement of components were investigated. Locally induced strain hardening by means of cold forging under a superimposed hydrostatic pressure could be realised. In addition, controlled martensitic zones could be created through forming-induced phase conversion in metastable austenitic steels. Other research focused on the replacement of heavy steel parts with high-strength nonferrous alloys or hybrid material compounds. Several forging processes for magnesium, aluminium and titanium alloys for different aeronautical and automotive applications were developed. The whole process chain, from material characterisation via simulation-based process design to the production of the parts, has been considered. The feasibility of forging complex-shaped geometries using these alloys was confirmed. In spite of the difficulties encountered due to machine noise and high temperature, the acoustic emission (AE) technique has been successfully applied for online monitoring of forging defects. A new AE analysis algorithm has been developed, so that different signal patterns due to various events such as product/die cracking or die wear could be detected and classified. Further, the feasibility of the mentioned forging technologies was proven by means of finite element analysis (FEA). For example, the integrity of forging dies with respect to crack initiation due to thermo-mechanical fatigue, as well as the ductile damage of forgings, was investigated with the help of cumulative damage models. In this paper some of the mentioned approaches are described.

  10. Real time control of a combined sewer system using radar-measured precipitation--results of the pilot study.

    PubMed

    Petruck, A; Holtmeier, E; Redder, A; Teichgräber, B

    2003-01-01

    Emschergenossenschaft and Lippeverband have developed a method to use radar-measured precipitation as an input for real-time control of a combined sewer system containing several overflow structures. Two real-time control strategies have been developed and tested: one is solely volume-based, the other is volume- and pollution-based. The system has been implemented in a pilot study in Gelsenkirchen, Germany. During the project the system was optimised and is now in constant operation. It was found that the volume of combined sewage overflow could be reduced by 5 per cent per year. This was also found in simulations carried out in similar catchment areas. Most of the potential for improvement can already be achieved by local pollution-based control strategies.

  11. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis.

    PubMed

    Waterfall, C M; Cobb, B D

    2001-12-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single-tube bi-directional ASA with a 'matrix-based' optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised, enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validate bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable.

  12. Development of a Gas Dynamic and Thermodynamic Simulation Model of the Lontra Blade Compressor™

    NASA Astrophysics Data System (ADS)

    Karlovsky, Jerome

    2015-08-01

    The Lontra Blade Compressor™ is a patented double-acting, internally compressing, positive displacement rotary compressor of innovative design. The Blade Compressor is in production for waste-water treatment, and will soon be launched for a range of applications at higher pressure ratios. In order to aid the design and development process, a thermodynamic and gas dynamic simulation program has been written in-house. The software has been successfully used to optimise geometries and running conditions of current designs, and is also being used to evaluate future designs for different applications and markets. The simulation code has three main elements: a positive displacement chamber model, a leakage model and a gas dynamic model to simulate gas flow through ports and to track pressure waves in the inlet and outlet pipes. All three of these models are interlinked in order to track mass and energy flows within the system. A correlation study has been carried out to verify the software. The main correlation markers used were mass flow, chamber pressure, pressure wave tracking in the outlet pipe, and volumetric efficiency. It will be shown that excellent correlation has been achieved between measured and simulated data. Mass flow predictions were within 2% of measured data, and the timings and magnitudes of all major gas dynamic effects were well replicated. The simulation will be further developed in the near future to help with the optimisation of exhaust and inlet silencers.

  13. Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.

    PubMed

    Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne

    2017-01-01

    Aim: To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis was undertaken of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings: The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents, which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.

  14. Development and optimization of a wildfire plume rise model based on remote sensing data inputs - Part 2

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Wooster, M.; Atherton, J.; Freitas, S. R.; Schultz, M. G.; Kaiser, J. W.

    2015-03-01

    Biomass burning is one of relatively few natural processes that can inject globally significant quantities of gases and aerosols into the atmosphere at altitudes well above the planetary boundary layer, in some cases at heights in excess of 10 km. The "injection height" of biomass burning emissions is therefore an important parameter to understand when considering the characteristics of the smoke plumes emanating from landscape-scale fires, and in particular when attempting to model their atmospheric transport. Here we further extend the formulations used within a popular 1D plume rise model, widely used for the estimation of landscape-scale fire smoke plume injection height, and develop and optimise the model so that it can run with an increased set of remotely sensed observations. The model is well suited for application in atmospheric Chemistry Transport Models (CTMs) aimed at understanding smoke plume downstream impacts, and whilst a number of wildfire emission inventories are available for use in such CTMs, few include information on plume injection height. Since CTM resolutions are typically too spatially coarse to capture the vertical transport induced by the heat released from landscape-scale fires, approaches to estimate the emissions injection height are typically based on parametrizations. Our extension of the existing 1D plume rise model takes into account the impact of atmospheric stability and latent heat on the plume up-draft, driving it with new information on active fire area and fire radiative power (FRP) retrieved from MODIS satellite Earth Observation (EO) data, alongside ECMWF atmospheric profile information. We extend the model by adding an equation for mass conservation and a new entrainment scheme, and optimise the values of the newly added parameters by comparison to injection heights derived from smoke plume height retrievals made using the MISR EO sensor. Our parameter optimisation procedure is based on a twofold approach, using sequentially a simulated annealing algorithm and a Markov chain Monte Carlo uncertainty test; to ensure appropriate convergence on suitable parameter values we use a training dataset consisting only of fires where a number of specific quality criteria are met, including local ambient wind shear limits derived from the ECMWF and MISR data, and "steady state" plumes and fires showing only relatively small changes between consecutive MODIS observations. Using our optimised plume rise model (PRMv2) with information from all MODIS-detected active fires in 2003 over North America, with outputs gridded to a 0.1° horizontal and 500 m vertical resolution mesh, we are able to derive wildfire injection height distributions whose maxima extend to the higher altitudes seen in actual observation-based wildfire plume datasets, unlike those derived via the original plume model or any other parametrization tested herein. We also find our model to be the only one tested that correctly simulates the very high plume (6 to 8 km a.s.l.) created by a large fire in Alberta (Canada) on 17 August 2003, though even our approach does not reach the stratosphere as the real plume is expected to have done. Our results lead us to believe that our PRMv2 approach to modelling the injection height of wildfire plumes is a strong candidate for inclusion in CTMs aiming to represent this process, but we note that significant advances in the spatio-temporal resolution of the data required to feed the model will very likely bring key improvements in our ability to represent such phenomena accurately, and that challenges remain in the detailed validation of such simulations due to the relative sparseness of plume height observations and their currently rather limited temporal coverage, which is not necessarily well matched to when fires are most active (MISR being confined to morning observations, for example).

  15. Incorporating GIS data into an agent-based model to support planning policy making for the development of creative industries

    NASA Astrophysics Data System (ADS)

    Liu, Helin; Silva, Elisabete A.; Wang, Qian

    2016-07-01

    This paper presents an extension to the agent-based model "Creative Industries Development-Urban Spatial Structure Transformation" by incorporating GIS data. Three agent classes, creative firms, creative workers and urban government, are considered in the model, and the spatial environment represents a set of GIS data layers (i.e. road network, key housing areas, land use). With the goal of helping urban policy makers to draw up policies locally and to optimise land use assignment in support of the development of creative industries, the improved model exhibits a capacity to assist policy makers in conducting experiments and simulating different policy scenarios, so as to see the corresponding dynamics of the spatial distributions of creative firms and creative workers across time within a city/district. The spatiotemporal graphs and maps record the simulation results and can be used as a reference by policy makers to adjust land use plans adaptively at different stages of the creative industries' development process.

  16. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, much research effort in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs simultaneously in detail and in a holistic view. This integrated eye-ball verification procedure available to man can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency coefficient or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, is evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that produced the respective hydrograph. The result of the poll can therefore be seen as an additional quality criterion for the comparison of the two approaches and can help in the evaluation of the automatic calibration method.

  17. Evolving optimised decision rules for intrusion detection using particle swarm paradigm

    NASA Astrophysics Data System (ADS)

    Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.

    2012-12-01

    The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective of this article is to prove that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and the optimised decision tree operating over this trained set produces classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
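    The instance-selection idea can be sketched as a binary particle swarm over inclusion masks. In the sketch below the fitness is the validation accuracy of a stand-in 1-nearest-neighbour classifier on synthetic data, not the paper's decision-tree family or the KDD set.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    X = rng.normal(0, 1, (120, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "traffic" labels
    X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

    def accuracy(mask_f):
        mask = mask_f.astype(bool)
        if not mask.any():
            return 0.0
        Xs, ys = X_tr[mask], y_tr[mask]
        d = ((X_val[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
        return float((ys[d.argmin(axis=1)] == y_val).mean())

    n_p, dim, iters = 20, len(X_tr), 60
    pos = (rng.random((n_p, dim)) < 0.5).astype(float)   # particle = 0/1 mask
    vel = rng.normal(0, 1, (n_p, dim))
    pbest = pos.copy()
    pbest_f = np.array([accuracy(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_p, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Binary PSO: resample bits with sigmoid(velocity) probabilities.
        pos = (rng.random((n_p, dim)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([accuracy(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()

    print("selected", int(gbest.sum()), "of", dim, "instances;",
          "validation accuracy:", accuracy(gbest))
    ```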

  18. Global optimisation methods for poroelastic material characterisation using a clamped sample in a Kundt tube setup

    NASA Astrophysics Data System (ADS)

    Vanhuyse, Johan; Deckers, Elke; Jonckheere, Stijn; Pluymers, Bert; Desmet, Wim

    2016-02-01

    The Biot theory is commonly used for the simulation of the vibro-acoustic behaviour of poroelastic materials. However, it relies on a number of material parameters. These can be hard to characterize and require dedicated measurement setups, yielding a time-consuming and costly characterisation. This paper presents a characterisation method which is able to identify all material parameters using only an impedance tube. The method relies on the assumption that the sample is clamped within the tube, that the shear wave is excited and that the acoustic field is no longer one-dimensional. This paper numerically shows the potential of the developed method. It therefore performs a sensitivity analysis of the quantification parameters, i.e. reflection coefficients and relative pressures, and a parameter estimation using global optimisation methods. A 3-step procedure is developed and validated. It is shown that even in the presence of numerically simulated noise this procedure leads to a robust parameter estimation.

  19. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electrical discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on a high-speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal condition of the machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relational grade.
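    The Grey relational computation that collapses the three responses into a single grade proceeds in three standard steps (normalisation, relational coefficient, grade). The sketch below uses invented response data rather than the paper's measurements.

    ```python
    import numpy as np

    responses = np.array([          # MRR, Ra, kerf for 4 hypothetical runs
        [2.1, 2.8, 0.32],
        [2.9, 3.1, 0.30],
        [2.5, 2.4, 0.35],
        [3.2, 3.4, 0.33],
    ])
    larger_better = [True, False, False]   # MRR up; roughness and kerf down

    # Step 1: grey relational normalisation to [0, 1].
    norm = np.empty_like(responses)
    for j, lb in enumerate(larger_better):
        col = responses[:, j]
        norm[:, j] = ((col - col.min()) / (col.max() - col.min()) if lb
                      else (col.max() - col) / (col.max() - col.min()))

    # Step 2: grey relational coefficient against the ideal sequence (all 1s).
    zeta = 0.5                       # distinguishing coefficient
    delta = np.abs(1.0 - norm)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # Step 3: grade = mean coefficient per run (weights could be added).
    grade = coeff.mean(axis=1)
    print("grades:", grade.round(3), "best run:", int(grade.argmax()) + 1)
    ```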

  20. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input that minimises the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out and its results show that our two approaches are both feasible and very effective.
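    For context, the delay-free problem that such a transformation reduces to is a standard discrete time-varying LQR, solvable by the backward Riccati recursion sketched below with illustrative matrices (the paper's delay-handling construction itself is not reproduced).

    ```python
    import numpy as np

    N = 20
    A = [np.array([[1.0, 0.1 * (1 + 0.05 * k)],
                   [0.0, 0.95]]) for k in range(N)]   # time-varying dynamics
    B = [np.array([[0.0], [0.1]]) for _ in range(N)]
    Q, R, QN = np.eye(2), 0.1 * np.eye(1), np.eye(2)

    P = QN
    K = [None] * N
    for k in reversed(range(N)):
        # K_k = (R + B' P B)^{-1} B' P A ;  P_k = Q + A' P (A - B K_k)
        K[k] = np.linalg.solve(R + B[k].T @ P @ B[k], B[k].T @ P @ A[k])
        P = Q + A[k].T @ P @ (A[k] - B[k] @ K[k])

    x = np.array([[1.0], [0.0]])
    for k in range(N):
        u = -K[k] @ x                 # optimal feedback u_k = -K_k x_k
        x = A[k] @ x + B[k] @ u
    print("terminal state:", x.ravel())
    ```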

  1. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

    New South Wales (NSW), Australia has a network of multirole retrieval physician-staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with the population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimisation model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or to minimise the overall average response time to all persons, both in greenfield scenarios and conditioned on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state-wide response time by 4 min. The optimum seven-base hybrid model, which was able to cover 97.75% of the population within 45 min and reach all of the population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimises response time for a given number of bases and a minimum defined threshold of population coverage. The addition of specialised rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
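    To make the coverage objective concrete, the sketch below solves a toy maximal covering location problem greedily; the study itself solves the exact optimisation models, and the population, candidate sites and coverage matrix here are randomly generated stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_areas, n_sites = 400, 25
    pop = rng.integers(200, 801, n_areas)             # people per census area
    reachable = rng.random((n_sites, n_areas)) < 0.15 # site covers area in 45 min

    def greedy_mclp(n_bases):
        # Repeatedly open the base covering the most as-yet-uncovered people.
        open_sites, covered = [], np.zeros(n_areas, dtype=bool)
        for _ in range(n_bases):
            gains = [(pop[reachable[s] & ~covered].sum(), s)
                     for s in range(n_sites) if s not in open_sites]
            gain, s = max(gains)
            open_sites.append(s)
            covered |= reachable[s]
        return open_sites, pop[covered].sum() / pop.sum()

    bases, frac = greedy_mclp(7)
    print("bases:", bases, f"population covered: {frac:.1%}")
    ```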

  2. The Thistle Field - Analysis of its past performance and optimisation of its future development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayat, M.G.; Tehrani, D.H.

    1985-01-01

    The Thistle Field geology and its reservoir performance over the past six years have been reviewed. The latest reservoir simulation study of the field, covering the performance history-matching and the conclusions of various prediction cases, is reported. The special features of PORES, Britoil's in-house 3D, three-phase, fully implicit numerical simulator, and its modelling aids as applied to the Thistle Field are presented.

  3. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better lower order approximation that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  4. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA, followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols gave image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre, a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  5. Unexpected finite size effects in interfacial systems: Why bigger is not always better—Increase in uncertainty of surface tension with bulk phase width

    NASA Astrophysics Data System (ADS)

    Longford, Francis G. J.; Essex, Jonathan W.; Skylaris, Chris-Kriton; Frey, Jeremy G.

    2018-06-01

    We present an unexpected finite size effect affecting interfacial molecular simulations that is proportional to the width-to-surface-area ratio of the bulk phase, L_l/A. This finite size effect has a significant impact on the variance of surface tension values calculated using the virial summation method. A theoretical derivation of the origin of the effect is proposed, giving a new insight into the importance of optimising system dimensions in interfacial simulations. We demonstrate the consequences of this finite size effect via a new way to estimate the surface energetic and entropic properties of simulated air-liquid interfaces. Our method is based on macroscopic thermodynamic theory and involves comparing the internal energies of systems with varying dimensions. We present the testing of these methods using simulations of the TIP4P/2005 water forcefield and a Lennard-Jones fluid model of argon. Finally, we provide suggestions of additional situations, in which this finite size effect is expected to be significant, as well as possible ways to avoid its impact.
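
    For a slab geometry with two interfaces, the virial (mechanical) route referred to above is commonly written as gamma = (L_z/2)(<P_zz> - (<P_xx> + <P_yy>)/2); the sketch below estimates the mean and its statistical uncertainty from per-frame pressure-tensor samples (synthetic numbers, units assumed consistent):

      import numpy as np

      def surface_tension(pxx, pyy, pzz, lz):
          """Virial-route surface tension for a slab with two interfaces:
          gamma = (Lz / 2) * (<Pzz> - (<Pxx> + <Pyy>) / 2)."""
          inst = 0.5 * lz * (pzz - 0.5 * (pxx + pyy))  # per-frame values
          return inst.mean(), inst.std(ddof=1) / np.sqrt(len(inst))

      # Hypothetical per-frame pressure-tensor components and box height.
      rng = np.random.default_rng(1)
      pxx, pyy = rng.normal(-60, 5, 1000), rng.normal(-60, 5, 1000)
      pzz = rng.normal(1, 5, 1000)
      gamma, err = surface_tension(pxx, pyy, pzz, lz=6.0)
      print(f"gamma = {gamma:.2f} +/- {err:.2f}")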

  6. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis

    PubMed Central

    Waterfall, Christy M.; Cobb, Benjamin D.

    2001-01-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702

  7. Design and experimental validation of linear and nonlinear vehicle steering control strategies

    NASA Astrophysics Data System (ADS)

    Menhour, Lghani; Lechner, Daniel; Charara, Ali

    2012-06-01

    This paper proposes the design of three control laws dedicated to vehicle steering control, two based on robust linear control strategies and one based on a nonlinear control strategy, and presents a comparison between them. The two robust linear control laws (indirect and direct methods) are built around M linear bicycle models; each of these control laws is composed of two sets of M proportional-integral-derivative (PID) controllers: one to control the lateral deviation and the other to control the vehicle yaw angle. The indirect control law is designed using an oscillation method and a nonlinear optimisation subject to an H ∞ constraint. The direct control law is designed using a linear matrix inequality optimisation in order to achieve H ∞ performance. The nonlinear control method used for the correction of the lateral deviation is based on a continuous first-order sliding-mode controller. The different methods are designed using a linear bicycle vehicle model with varying parameters, but the aim is to simulate the nonlinear vehicle behaviour under high dynamic demands with a four-wheel vehicle model. These steering vehicle controls are validated experimentally using data acquired with a laboratory vehicle, a Peugeot 307, developed by the National Institute for Transport and Safety Research's Department of Accident Mechanism Analysis (INRETS-MA), and their performance results are compared. Moreover, an unknown input sliding-mode observer is introduced to estimate the road bank angle.

  8. Development and validation of real-time simulation of X-ray imaging with respiratory motion.

    PubMed

    Vidal, Franck P; Villard, Pierre-Frédéric

    2016-04-01

    We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on GPU to simultaneously compute the respiratory motion and X-ray imaging in real-time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as an interactive medical virtual environment for training percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
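
    The Beer-Lambert attenuation underlying the X-ray image is straightforward to state in code. A minimal sketch with illustrative attenuation coefficients (this is the physical law only, not the library's actual API):

      import numpy as np

      def beer_lambert(i0, mu, path_lengths):
          """Transmitted intensity after a ray crosses several materials:
          I = I0 * exp(-sum_i mu_i * d_i) (Beer-Lambert law)."""
          return i0 * np.exp(-np.dot(mu, path_lengths))

      # Hypothetical ray through 3 cm of soft tissue and 1 cm of bone at ~60 keV.
      mu = np.array([0.20, 0.57])   # linear attenuation coefficients, 1/cm (illustrative)
      d  = np.array([3.0, 1.0])     # path lengths in each material, cm
      print(beer_lambert(1.0, mu, d))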

  9. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  10. High-End Concept Based on Hypersonic Two-Stage Rocket and Electro-Magnetic Railgun to Launch Micro-Satellites Into Low-Earth Orbit

    NASA Astrophysics Data System (ADS)

    Bozic, O.; Longo, J. M.; Giese, P.; Behren, J.

    2005-02-01

    The electromagnetic railgun technology appears to be an interesting alternative for launching small payloads into Low Earth Orbit (LEO), as it may lower launch costs. A high-end solution, based upon present state-of-the-art technology, has been investigated to derive the technical boundary conditions for the application of such a new system. This paper presents the main concept and the design aspects of such a propelled projectile, with special emphasis on flight mechanics, aero-/thermodynamics, materials and propulsion characteristics. Launch angle and trajectory optimisation analyses are carried out by means of three-degree-of-freedom (3DOF) simulations. The aerodynamic form of the projectile is optimised for minimum drag and low heat loads. The surface temperature distribution for critical zones is calculated with the DLR-developed Navier-Stokes codes TAU and HOTSOSE, whereas the engineering tool HF3T is used for time-dependent calculations of heat loads and temperatures on the projectile surface and inner structures. Furthermore, competing propulsion systems are considered for the rocket engines of both stages. The structural mass is analysed mostly on the basis of carbon fibre reinforced materials as well as classical aerospace metallic materials. Finally, this paper gives a critical overview of the technical feasibility and cost of small rockets for such missions. Key words: micro-satellite, two-stage rocket, railgun, rocket engines, aero/thermodynamics, mass optimization

  11. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
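
    The Danish correction model itself is not reproduced here, but the shape of a time-space variable catch correction can be sketched as a daily multiplicative factor driven by wind speed and temperature; the coefficients below are purely illustrative assumptions, not the model's values:

      import numpy as np

      def catch_correction(precip, wind, temp):
          """Apply a daily multiplicative gauge catch correction.
          Illustrative assumption: undercatch grows with wind speed and is
          much larger for solid precipitation (temp < 0 C) than for rain."""
          k_liquid = 1.0 + 0.02 * wind   # modest wind-induced undercatch for rain
          k_solid  = 1.0 + 0.10 * wind   # larger undercatch for snow
          k = np.where(temp < 0.0, k_solid, k_liquid)
          return precip * k

      # Hypothetical daily series: precipitation (mm), wind (m/s), temperature (C).
      precip = np.array([5.0, 0.0, 12.0, 3.0])
      wind   = np.array([4.0, 6.0, 8.0, 2.0])
      temp   = np.array([2.0, -3.0, -1.0, 5.0])
      print(catch_correction(precip, wind, temp))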

  12. Design and analysis of compact MMIC switches utilising GaAs pHEMTs in 3D multilayer technology

    NASA Astrophysics Data System (ADS)

    Haris, Norshakila; Kyabaggu, Peter B. K.; Alim, Mohammad A.; Rezazadeh, Ali A.

    2017-05-01

    In this paper, we demonstrate for the first time the implementation of three-dimensional multilayer technology on GaAs-based pseudomorphic high electron mobility transistor (pHEMT) switches. Two types of pHEMT switches are considered, namely single-pole single-throw (SPST) and single-pole double-throw (SPDT). The design and analysis of the devices are demonstrated first through a simulation of the industry-recognised standard model, TriQuint's Own Model Level 3, developed by TriQuint Semiconductor, Inc. From the simulation analysis, three optimised SPST and SPDT pHEMT switches, which can address applications ranging from L to X bands, are fabricated and tested. The performance of the pHEMT switches using multilayer technology is comparable to that of current state-of-the-art pHEMT switches, while simultaneously offering compact circuits with the advantage of integration with other MMIC components.

  13. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Since the local DRL for chest examinations in infants exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborn chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a dose reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), were observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and for performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
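
    Assuming the common radiographic definition FOM = SNR^2/dose (the study's exact definition may differ), the trade-off between noise and dose can be summarised in a few lines; the numbers are illustrative, not the study's data:

      def figure_of_merit(snr, dose):
          """Common dose-efficiency metric in radiography: FOM = SNR**2 / dose.
          A protocol can lose SNR yet gain FOM if the dose drops faster."""
          return snr ** 2 / dose

      # Illustrative numbers only: 20% lower SNR at 60% lower dose raises FOM.
      before = figure_of_merit(snr=100.0, dose=1.00)
      after  = figure_of_merit(snr=80.0, dose=0.40)
      print(f"FOM change: {100 * (after / before - 1):+.0f}%")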

  14. Strong constraint on modelled global carbon uptake using solar-induced chlorophyll fluorescence data.

    PubMed

    MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias

    2018-01-31

    Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly-available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr(-1) to 166 ± 10 PgC yr(-1), bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr(-1). The strongest reductions in GPP are seen in boreal forests: the result is a shift in global GPP distribution, with a ~50% increase in the tropical-to-boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.

  15. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The excellence of the proposed method is evaluated using several important attributes of a filter, and the comparative study evidences its effectiveness for FIR filter design.
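
    As a simplified illustration of the swarm-optimised design idea (operating directly on the filter taps rather than on the paper's polyphase components and fractional derivative constraints), the sketch below runs a plain particle swarm over a low-pass design objective:

      import numpy as np

      # Ideal low-pass response sampled on a frequency grid (cutoff 0.3*pi).
      w = np.linspace(0, np.pi, 128)
      ideal = (w <= 0.3 * np.pi).astype(float)

      def response(h, w):
          n = np.arange(len(h))
          return np.abs(np.exp(-1j * np.outer(w, n)) @ h)

      def mse(h):
          return np.mean((response(h, w) - ideal) ** 2)

      # Plain particle swarm optimisation over the filter taps (order 20).
      rng = np.random.default_rng(2)
      dim, n_particles = 21, 40
      x = rng.uniform(-0.5, 0.5, (n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), np.array([mse(p) for p in x])
      gbest = pbest[pbest_f.argmin()].copy()
      for _ in range(300):
          r1, r2 = rng.random((2, n_particles, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = x + v
          f = np.array([mse(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()
      print("final MSE:", mse(gbest))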

  16. Optimised detection of mitochondrial DNA strand breaks.

    PubMed

    Hanna, Rebecca; Crowther, Jonathan M; Bulsara, Pallav A; Wang, Xuying; Moore, David J; Birch-Machin, Mark A

    2018-05-04

    Intrinsic and extrinsic factors that induce cellular oxidative stress damage tissue integrity and promote ageing, resulting in accumulative strand breaks to the mitochondrial DNA (mtDNA) genome. Limited repair mechanisms and close proximity to superoxide generation make mtDNA a prominent biomarker of oxidative damage. Using human DNA we describe an optimised long-range qPCR methodology that sensitively detects mtDNA strand breaks relative to a suite of short mitochondrial and nuclear DNA housekeeping amplicons, which control for any variation in mtDNA copy number. An application is demonstrated by detecting 16-36-fold mtDNA damage in human skin cells induced by hydrogen peroxide and solar simulated radiation. Copyright © 2018 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  17. Sustainable Mining Land Use for Lignite Based Energy Projects

    NASA Astrophysics Data System (ADS)

    Dudek, Michal; Krysa, Zbigniew

    2017-12-01

    This research discusses the economic viability of complex lignite-based energy projects and their impact on sustainable land use, with respect to project risk and uncertainty, economics, optimisation (e.g. Lerchs-Grossmann) and the importance of lignite as a fuel that may be expressed in situ as a deposit of energy. The sensitivity analysis and simulation consist of estimated variable land acquisition costs, geostatistics, 3D deposit block modelling, the electricity price considered as the project product price, power station efficiency, power station lignite processing unit cost, CO2 allowance costs, mining unit cost, and lignite availability treated as the kriging estimation error of lignite reserves. The investigated parameters have a nonlinear influence on the results, so the economically viable amount of lignite in the optimal pit varies, which in turn has a nonlinear impact on the land area required for the mining operation.

  18. The development and optimisation of a primary care-based whole system complex intervention (CARE Plus) for patients with multimorbidity living in areas of high socioeconomic deprivation

    PubMed Central

    O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally

    2016-01-01

    Objectives To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, and general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff was important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113

  19. Computer simulation of electron flow in linear-beam microwave tubes

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit

    1990-12-01

    The computer simulation of electron flow in linear-beam microwave tubes, such as a travelling-wave tube (TWT) and klystron, is used for designing and optimising the electron gun and collector and for analysing the large-signal beam-wave interaction phenomenon. Major aspects of simulation of electron flow in static and rf fields present in such tubes are discussed. Some advancements made in this respect and results obtained from computer programs developed by the research group at CEERI for a gridded electron gun, depressed collector, and large-signal analysis of TWT and klystron are presented.

  20. Numerical Optimisation in Non Reacting Conditions of the Injector Geometry for a Continuous Detonation Wave Rocket Engine

    NASA Astrophysics Data System (ADS)

    Gaillard, T.; Davidenko, D.; Dupoirieux, F.

    2015-06-01

    The paper presents the methodology and the results of a numerical study aimed at the investigation and optimisation of different means of fuel and oxidizer injection adapted to rocket engines operating in the rotating detonation mode. As the simulations are performed at the local scale of a single injection element, only one periodic pattern of the whole geometry is calculated, so the travelling detonation waves and the associated chemical reactions cannot be taken into account. Separate injection of fuel and oxidizer is considered, because premixed injection is handicapped by the risk of upstream propagation of the detonation wave. Different associations of geometrical periodicity and symmetry are investigated for the injection elements distributed over the injector head. To analyse the injection and mixing processes, a nonreacting 3D flow is simulated using the LES approach. Performance of the studied configurations is analysed using the results on instantaneous and mean flowfields, as well as by comparing the mixing efficiency and the total pressure recovery evaluated for the different configurations.

  1. Optimization of a multi-channel parabolic guide for the material science diffractometer STRESS-SPEC at FRM II

    NASA Astrophysics Data System (ADS)

    Rebelo Kornmeier, Joana; Ostermann, Andreas; Hofmann, Michael; Gibmeier, Jens

    2014-02-01

    Neutron strain diffractometers usually use slits to define a gauge volume within engineering samples. In this study a multi-channel parabolic neutron guide was developed to be used instead of the primary slit, to minimise the loss of intensity and of vertical definition of the gauge volume that occurs when slits are placed far away from the measurement position in bulky components. The major advantage of a focusing guide is that the maximum flux is not at the exit of the guide, as for a slit system, but at the focal point, relatively far away from the exit of the guide. Monte Carlo simulations were used to optimise the multi-channel parabolic guide with respect to the instrument characteristics of the diffractometer STRESS-SPEC at the FRM II neutron source. The simulations are also in excellent agreement with experimental measurements using the optimised multi-channel parabolic guide at the neutron diffractometer. In addition, the performance of the guide was compared to the standard slit setup at STRESS-SPEC using a single bead weld sample used in earlier round robin tests for residual strain measurements.

  2. Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.

    PubMed

    Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P

    2017-01-01

    To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. An integrated method and tools were developed in co-creation with all stakeholders, based on a list of requirements.

  3. Optimising the Blended Learning Environment: The Arab Open University Experience

    ERIC Educational Resources Information Center

    Hamdi, Tahrir; Abu Qudais, Mohammed

    2018-01-01

    This paper will offer some insights into possible ways to optimise the blended learning environment based on experience with this modality of teaching at Arab Open University/Jordan branch and also by reflecting upon the results of several meta-analytical studies, which have shown blended learning environments to be more effective than their face…

  4. Predictive Array Design. A method for sampling combinatorial chemistry library space.

    PubMed

    Lipkin, M J; Rose, V S; Wood, J

    2002-01-01

    A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a sub-array for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation; libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all-combinations array if we assume that each monomer has a relatively constant contribution to activity and that the activity of a compound is the sum of the activities of its constituent monomers.
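
    A minimal sketch of the simulated annealing step for sub-array selection, assuming a made-up monomer property list and a target mean property (the authors optimise a full physicochemical profile; this shows only the skeleton of the move/accept loop):

      import math, random

      random.seed(3)
      # Hypothetical physicochemical property (e.g. logP) for 20 candidate monomers.
      props = [random.gauss(2.0, 1.0) for _ in range(20)]
      TARGET_MEAN, K = 2.0, 6   # want a 6-monomer subset whose mean property is ~2

      def cost(subset):
          return abs(sum(props[i] for i in subset) / len(subset) - TARGET_MEAN)

      current = random.sample(range(20), K)
      best, temp = list(current), 1.0
      for step in range(2000):
          # Propose a swap: replace one selected monomer with an unselected one.
          cand = list(current)
          cand[random.randrange(K)] = random.choice([i for i in range(20) if i not in current])
          delta = cost(cand) - cost(current)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              current = cand
              if cost(current) < cost(best):
                  best = list(current)
          temp *= 0.995   # geometric cooling schedule
      print(sorted(best), cost(best))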

  5. How should we build a generic open-source water management simulator?

    NASA Astrophysics Data System (ADS)

    Khadem, M.; Meier, P.; Rheinheimer, D. E.; Padula, S.; Matrosov, E.; Selby, P. D.; Knox, S.; Harou, J. J.

    2014-12-01

    Increasing water needs for agriculture, industry and cities mean effective and flexible water resource system management tools will remain in high demand. Currently, many regions or countries use simulators that have been adapted over time to their unique system properties and water management rules and realities, and most regions operate with a preferred short-list of water management and planning decision support systems. Is there scope for a simulator, shared within the water management community, that could be adapted to different contexts, integrate community contributions, and connect to generic data and model management software? What role could open-source play in such a project? How could a generic user interface and data/model management software sustainably be attached to this model or suite of models? Finally, how could such a system effectively leverage existing model formulations, modelling technologies and software? These questions are addressed by the initial work presented here. We introduce a generic water resource simulation formulation that enables and integrates both rule-based and optimisation-driven technologies. We suggest how it could be linked to other sub-models, allowing for detailed agent-based simulation of water management behaviours. An early formulation is applied, as an example, to the Thames water resource system in the UK. The model uses centralised optimisation to calculate allocations but also allows for rule-based operations, in an effort to represent observed behaviours and rules with fidelity. The model is linked through import/export commands to a generic network model platform named Hydra. Benefits and limitations of the approach are discussed, and planned work and potential use cases are outlined.

  6. Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels

    NASA Astrophysics Data System (ADS)

    Vandersickel, A.; Wright, Y. M.; Boulouchos, K.

    2013-12-01

    Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines; in particular, a tremendous demand exists in the engine development community for schemes covering full boiling range fuels exhibiting two-stage ignition. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions by means of various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full-scale HCCI engine simulations.

  7. Improved power steering with double and triple ring waveguide systems: the impact of the operating frequency.

    PubMed

    Kok, H P; de Greef, M; Borsboom, P P; Bel, A; Crezee, J

    2011-01-01

    Regional hyperthermia systems with 3D power steering have been introduced to improve tumour temperatures. The 3D 70-MHz AMC-8 system has two rings of four waveguides. The aim of this study is to evaluate whether T(90) will improve by using a higher operating frequency and whether further improvement is possible by adding a third ring. Optimised specific absorption rate (SAR) distributions were evaluated for a centrally located target in tissue-equivalent phantoms, and temperature optimisation was performed for five cervical carcinoma patients with constraints on normal tissue temperatures. The resulting T(90) and the thermal iso-effect dose (i.e. the number of equivalent minutes at 43°C) were evaluated and compared to the 2D 70-MHz AMC-4 system with a single ring of four waveguides. FDTD simulations were performed at 2.5 × 2.5 × 5 mm(3) resolution. The applied frequencies were 70, 100, 120, 130, 140 and 150 MHz. Optimised SAR distributions in phantoms were best at 140 MHz. For the patient simulations, the largest increase in T(90) was observed at 130 MHz. For a two-ring system at 70 MHz, the gain in T(90) was about 0.5°C compared to the AMC-4 system, averaged over the five patients. At 130 MHz the average gain in T(90) was ~1.5°C and ~2°C for a two- and a three-ring system, respectively. This implies an improvement of the thermal iso-effect dose by a factor of ~12 and ~30, respectively. Simulations showed that a 130-MHz two-ring waveguide system yields significantly higher tumour temperatures compared to 70-MHz single-ring and double-ring waveguide systems. Temperatures were further improved with a 130-MHz triple-ring system.

  8. Prediction-based sampled-data H∞ controller design for attitude stabilisation of a rigid spacecraft with disturbances

    NASA Astrophysics Data System (ADS)

    Zhu, Baolong; Zhang, Zhiping; Zhou, Ding; Ma, Jie; Li, Shunli

    2017-08-01

    This paper investigates the H∞ control problem of attitude stabilisation of a rigid spacecraft with external disturbances using a prediction-based sampled-data control strategy. Aiming to achieve a 'virtual' closed-loop system, a type of parameterised sampled-data controller is designed by introducing a prediction mechanism. The resultant closed-loop system is equivalent to a hybrid system comprising a continuous-time and an impulsive differential system. By using a time-varying Lyapunov functional, a generalised bounded real lemma (GBRL) is first established for a class of impulsive differential systems. Based on this GBRL and the Lyapunov functional approach, a sufficient condition is derived to guarantee that the closed-loop system is asymptotically stable and achieves a prescribed H∞ performance. In addition, the controller parameter tuning is cast into a convex optimisation problem. Simulation and comparative results are provided to illustrate the effectiveness of the developed control scheme.

  9. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs), and swarm intelligence techniques are a good approach to solving them. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set to evolve under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems, and an exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with those of state-of-the-art algorithms.

  10. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions

    NASA Astrophysics Data System (ADS)

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-03-01

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell's equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which can allow novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.

  12. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiances. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.

  14. Crystal structure optimisation using an auxiliary equation of state

    NASA Astrophysics Data System (ADS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
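
    For reference, the conventional procedure the paper streamlines is a fit of an energy-volume curve to an equation of state; a sketch using the third-order Birch-Murnaghan form and synthetic E(V) points (all values are invented for illustration):

      import numpy as np
      from scipy.optimize import curve_fit

      def birch_murnaghan(v, e0, v0, b0, b0p):
          """Third-order Birch-Murnaghan energy-volume equation of state."""
          eta = (v0 / v) ** (2.0 / 3.0)
          return e0 + 9.0 * v0 * b0 / 16.0 * (
              (eta - 1.0) ** 3 * b0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

      # Synthetic E(V) points (eV, Angstrom^3) around an equilibrium volume of ~40 A^3.
      v = np.linspace(34, 46, 9)
      e = birch_murnaghan(v, -10.0, 40.0, 0.6, 4.5) \
          + np.random.default_rng(4).normal(0, 1e-4, v.size)
      popt, _ = curve_fit(birch_murnaghan, v, e, p0=(e.min(), v.mean(), 0.5, 4.0))
      print("fitted equilibrium volume:", popt[1])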

  15. Vertical transportation systems embedded on shuffled frog leaping algorithm for manufacturing optimisation problems in industries.

    PubMed

    Aungkulanon, Pasura; Luangpaiboon, Pongchanun

    2016-01-01

    Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching the normal operating velocity, and transitional motion that either reaches it or does not. These variants were used to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods when compared to other optimisation algorithms for an actual deep cut design.

  16. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses make it possible to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores derived from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting the autocorrelation coefficients onto just a single optimised projection vector.

  17. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller-dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, is designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  18. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With the influence of sawing speed on the tangential force distribution taken into account, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power by the MPFD with few initial experimental samples was proved in case studies; on the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated: a case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.

  19. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With the continuously increasing requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations; fractional differential equations may represent the intrinsic characteristics of such systems more precisely. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem, and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.

  20. Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms.

    PubMed

    Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter

    2007-01-01

    A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate the catalyst performances. Out of a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
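
    The genetic-algorithm loop over discrete catalyst compositions can be sketched as follows, with a stand-in fitness function in place of the monolith-reactor conversion measurements (the composition encoding and GA settings are assumptions, not the authors'):

      import random

      random.seed(5)
      METALS, LEVELS = list(range(49)), list(range(1, 10))   # 49 metals, 9 loading levels

      def random_catalyst():
          # Up to three metals on the Al2O3 support, each with a loading level.
          metals = random.sample(METALS, random.randint(1, 3))
          return [(m, random.choice(LEVELS)) for m in metals]

      def fitness(cat):
          """Stand-in objective: the real screen measures CO and HC conversion
          in the multi-channel monolith reactor; here we fake a smooth response."""
          return -sum((m % 7 - level) ** 2 for m, level in cat) / len(cat)

      def crossover(a, b):
          child = {m: l for m, l in a + b}   # merge parents' components
          metals = random.sample(list(child), min(len(child), 3))
          return [(m, child[m]) for m in metals]

      def mutate(cat):
          new_m = random.choice([m for m in METALS if m not in [c[0] for c in cat]])
          return cat[1:] + [(new_m, random.choice(LEVELS))]

      pop = [random_catalyst() for _ in range(30)]
      for gen in range(40):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]
          pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]
      best = max(pop, key=fitness)
      print(best, fitness(best))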

  1. Modulation aware cluster size optimisation in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we have focused on energy minimisation with the help of cluster size optimisation, along with consideration of modulation effects when the nodes are not able to communicate using a baseband communication technique. Cluster size optimisation is an important technique to improve the performance of WSNs: it provides improvements in energy efficiency, network scalability, network lifetime and latency. We have proposed an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme (BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the scenario enables only a small number of modulation schemes to work in an energy-efficient manner, whereas placing the base station at the corner of the sensing field enables a large number of modulation schemes to do so.

  2. Population pharmacokinetic modelling of rupatadine solution in 6-11 year olds and optimisation of the experimental design in younger children.

    PubMed

    Santamaría, Eva; Estévez, Javier Alejandro; Riba, Jordi; Izquierdo, Iñaki; Valle, Marta

    2017-01-01

    To optimise a pharmacokinetic (PK) study design of rupatadine for 2-5 year olds by using a population PK model developed with data from a study in 6-11 year olds; the design optimisation was driven by the need to avoid children's discomfort in the study. PK data from 6-11 year olds with allergic rhinitis, available from a previous study, were used to construct a population PK model, which we used in simulations to assess the dose to administer in a study in 2-5 year olds. In addition, an optimal design approach was used to determine the most appropriate number of sampling groups, sampling days, total samples and sampling times. A two-compartment model with first-order absorption and elimination, and with clearance dependent on weight, adequately described the PK of rupatadine in 6-11 year olds. The dose selected for a trial in 2-5 year olds was 2.5 mg, as it provided a Cmax below the 3 ng/ml threshold. The optimal study design consisted of four groups of children (10 children each), a maximum sampling window of 2 hours in two clinic visits for drawing three samples on day 14 and one on day 28, coinciding with the final examination of the study. The PK study design was thus optimised to prioritise the avoidance of discomfort for the enrolled 2-5 year olds, by taking only four blood samples from each child and minimising the length of hospital stays.
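
    A minimal simulation sketch of the model class described, a two-compartment model with first-order absorption and weight-dependent (here, allometrically scaled) clearance; all parameter values are invented for illustration and are not the published estimates:

      import numpy as np
      from scipy.integrate import odeint

      def two_compartment(y, t, ka, cl, vc, q, vp):
          """First-order absorption into a central compartment linked to a
          peripheral compartment; amounts in gut, central and peripheral."""
          gut, cen, per = y
          return [-ka * gut,
                  ka * gut - (cl / vc) * cen - (q / vc) * cen + (q / vp) * per,
                  (q / vc) * cen - (q / vp) * per]

      # Hypothetical parameters; clearance scales allometrically with body weight,
      # echoing the weight-dependent clearance reported for 6-11 year olds.
      weight = 20.0                                      # kg
      cl = 10.0 * (weight / 70.0) ** 0.75                # L/h, illustrative value
      ka, vc, q, vp, dose = 1.2, 30.0, 5.0, 50.0, 2.5    # 2.5 mg oral dose
      t = np.linspace(0, 24, 241)
      amounts = odeint(two_compartment, [dose, 0.0, 0.0], t, args=(ka, cl, vc, q, vp))
      conc = amounts[:, 1] / vc * 1000.0                 # central conc., ng/ml (mg/L * 1000)
      print("Cmax (ng/ml):", conc.max())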

  3. The optimisation, design and verification of feed horn structures for future Cosmic Microwave Background missions

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten

    2016-05-01

    In order to investigate the origins of the Universe, it is necessary to carry out full-sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature; however, in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance; however, owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise-conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design was manufactured and the measured results were compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.

  4. Optimisation of shape kernel and threshold in image-processing motion analysers.

    PubMed

    Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G

    2001-09-01

    The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
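    To make the cross-correlation idea concrete, the sketch below matches a disc-shaped kernel against an image by normalised cross-correlation and reports peaks above a threshold; in the study it is precisely such kernel parameters (size, shape, threshold) that the genetic algorithm tunes. The kernel, test image and threshold here are toy assumptions.

    ```python
    import numpy as np

    def disc_kernel(d):
        """Expected marker shape: a binary disc (the 'kernel')."""
        r = d / 2.0
        y, x = np.mgrid[:d, :d]
        return ((x - r + 0.5) ** 2 + (y - r + 0.5) ** 2 <= r ** 2).astype(float)

    def match(image, kernel, threshold):
        """Scan the image; report positions whose normalised
        cross-correlation with the kernel exceeds the threshold."""
        kh, kw = kernel.shape
        kz = (kernel - kernel.mean()) / kernel.std()
        hits = []
        for i in range(image.shape[0] - kh + 1):
            for j in range(image.shape[1] - kw + 1):
                w = image[i:i + kh, j:j + kw]
                if w.std() == 0:
                    continue
                score = np.mean(kz * (w - w.mean()) / w.std())
                if score > threshold:
                    hits.append((i + kh // 2, j + kw // 2, round(score, 2)))
        return hits

    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 0.05, (32, 32))   # sensor noise
    img[10:14, 10:14] += disc_kernel(4)     # one synthetic marker
    print(match(img, disc_kernel(4), threshold=0.8))
    ```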

  5. Model structure identification for wastewater treatment simulation based on computational fluid dynamics.

    PubMed

    Alex, J; Kolisch, G; Krause, K

    2002-01-01

    The objective of this project is to use the results of a CFD simulation to automatically, systematically and reliably generate an appropriate model structure for simulating the biological processes using CSTR activated sludge compartments. Models and dynamic simulation have become important tools for research, and increasingly for the design and optimisation of wastewater treatment plants. Besides the biological models, several applications of computational fluid dynamics (CFD) to wastewater treatment plants have been reported. One aim of the presented method for deriving model structures from CFD results is to exclude the influence of empirical structure selection on the results of dynamic simulation studies of WWTPs. The second application of the approach is the analysis of badly performing treatment plants where poor flow behaviour, such as short-circuiting, is suspected to be part of the problem. The method first requires the calculation of the fluid dynamics of the biological treatment step at different loading situations by means of 3-dimensional CFD simulation. This information is then used to automatically generate a suitable model structure for conventional dynamic simulation of the treatment plant, consisting of a number of CSTR modules with a pattern of exchange flows between the tanks. The method is explained in detail and its application to the WWTP Wuppertal Buchenhofen is presented.
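    Once the tank volumes and exchange flows are fixed, the resulting compartment structure can be simulated conventionally. Below is a minimal tracer simulation of a hypothetical three-CSTR structure with one back-mixing exchange flow; the volumes and flows are invented, not derived from the Buchenhofen CFD results.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical structure: 3 CSTRs in series plus an exchange
    # (back-mixing) flow between tanks 1 and 2.
    V = np.array([500.0, 800.0, 700.0])  # tank volumes, m3
    Q = 250.0                            # through-flow, m3/h
    E12 = 60.0                           # exchange flow tank1 <-> tank2, m3/h

    def tracer(t, c):
        c_in = 1.0 if t < 0.5 else 0.0   # 30 min tracer pulse at the inlet
        return [
            (Q * (c_in - c[0]) + E12 * (c[1] - c[0])) / V[0],
            (Q * (c[0] - c[1]) + E12 * (c[0] - c[1])) / V[1],
            (Q * (c[1] - c[2])) / V[2],
        ]

    sol = solve_ivp(tracer, (0.0, 24.0), [0.0, 0.0, 0.0], max_step=0.05)
    print("outlet peak at t = %.1f h" % sol.t[np.argmax(sol.y[2])])
    ```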

  6. An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Heydari, Ali; Balakrishnan, S. N.

    2014-12-01

    The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.

  7. An FEM-based AI approach to model parameter identification for low vibration modes of wind turbine composite rotor blades

    NASA Astrophysics Data System (ADS)

    Navadeh, N.; Goroshko, I. O.; Zhuk, Y. A.; Fallah, A. S.

    2017-11-01

    An approach to the construction of a beam-type simplified model of a horizontal-axis wind turbine composite blade based on the finite element method is proposed. The model allows effective and accurate description of the low-frequency bending modes of vibration, taking into account the coupling between flapwise and lead-lag modes that arises from the non-uniform distribution of twist angle along the blade's length. The identification of model parameters is carried out on the basis of modal data obtained from more detailed finite element simulations, followed by application of the 'DIRECT' optimisation algorithm. Stable identification results were obtained by using absolute deviations in frequencies and modal displacements in the objective function, together with additional a priori information (boundedness and monotonicity) on the solution properties.
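    The DIRECT algorithm is available in SciPy, which makes the identification loop easy to sketch. Below, a toy two-parameter "beam model" is fitted to synthetic modal data by minimising absolute frequency deviations; the model relation, bounds and target values are placeholders, not the blade model of the paper.

    ```python
    import numpy as np
    from scipy.optimize import direct

    # Toy relation between two stiffness-like parameters and the first
    # two natural frequencies (NOT the paper's blade model).
    def model_freqs(p):
        return np.sqrt(np.array([3.52 * p[0], 22.0 * p[1]])) / (2 * np.pi)

    target = model_freqs([1.8, 2.4])  # stands in for the FE modal data

    def objective(p):
        # absolute deviations in frequencies, as in the identification scheme
        return float(np.abs(model_freqs(p) - target).sum())

    res = direct(objective, bounds=[(0.5, 5.0), (0.5, 5.0)])
    print(res.x, res.fun)
    ```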

  8. Limitations of subjective cognitive load measures in simulation-based procedural training.

    PubMed

    Naismith, Laura M; Cheung, Jeffrey J H; Ringsted, Charlotte; Cavalcanti, Rodrigo B

    2015-08-01

    The effective implementation of cognitive load theory (CLT) to optimise the instructional design of simulation-based training requires sensitive and reliable measures of cognitive load. This mixed-methods study assessed relationships between commonly used measures of total cognitive load and the extent to which these measures reflected participants' experiences of cognitive load in simulation-based procedural skills training. Two groups of medical residents (n = 38) completed three questionnaires after participating in simulation-based procedural skills training sessions: the Paas Cognitive Load Scale, the NASA Task Load Index (TLX), and a cognitive load component (CLC) questionnaire we developed to assess total cognitive load as the sum of intrinsic load (how complex the task is), extraneous load (how the task is presented) and germane load (how the learner processes the task for learning). We calculated Pearson's correlation coefficients to assess agreement among these instruments. Group interviews explored residents' perceptions of how the simulation sessions contributed to their total cognitive load. Interviews were audio-recorded, transcribed and subjected to qualitative content analysis. Total cognitive load scores differed significantly according to the instrument used to assess them. In particular, there was poor agreement between the Paas Scale and the TLX. Quantitative and qualitative findings supported intrinsic cognitive load as synonymous with mental effort (Paas Scale), mental demand (TLX) and task difficulty and complexity (CLC questionnaire). Additional qualitative themes relating to extraneous and germane cognitive loads were not reflected in any of the questionnaires. The Paas Scale, TLX and CLC questionnaire appear to be interchangeable as measures of intrinsic cognitive load, but not of total cognitive load. A more complete understanding of the sources of extraneous and germane cognitive loads in simulation-based training contexts is necessary to determine how best to measure and assess their effects on learning and performance outcomes. © 2015 John Wiley & Sons Ltd.
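    The agreement analysis reduces to pairwise correlations between instrument scores. A minimal sketch with invented per-participant scores (not the study's data):

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical total-load scores from two instruments, one row of
    # values per participant
    paas = np.array([6, 5, 7, 4, 8, 6, 5, 7, 3, 6])
    tlx = np.array([55, 40, 70, 52, 60, 35, 45, 80, 30, 50])

    r, p = pearsonr(paas, tlx)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    ```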

  9. Simulation of geothermal water extraction in heterogeneous reservoirs using dynamic unstructured mesh optimisation

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.

    2017-12-01

    It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify the grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretised using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretise space, and the spatial location of the well is specified via a line vector, ensuring that its location is preserved even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by the use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields such as pressure, velocity or temperature (while preserving the geometry of the geologic domains); this also increases the quality of the solutions by placing higher resolution where required, so as to reduce an error metric based on the Hessian of the field. This allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields and regional models of groundwater resources.

  10. Noise in NC-AFM measurements with significant tip–sample interaction

    PubMed Central

    Lübbe, Jannis; Temmen, Matthias

    2016-01-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip–sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip–sample interaction, by the coupling between the amplitude and tip–sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip–sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops. PMID:28144538

  11. Shape optimisation of an underwater Bernoulli gripper

    NASA Astrophysics Data System (ADS)

    Flint, Tim; Sellier, Mathieu

    2015-11-01

    In this work, we are interested in maximising the suction produced by an underwater Bernoulli gripper. Bernoulli grippers work by exploiting the low-pressure region caused by the acceleration of a working fluid through a narrow channel between the gripper and a surface, to provide a suction force. This mechanism allows for non-contact adhesion to various surfaces and may be used, for example, to hold a robot to the hull of a ship while it inspects welds. A Bernoulli-type pressure analysis was used to model the system, with a Darcy friction factor approximation to include the effects of frictional losses. The analysis involved a constrained optimisation in order to avoid cavitation within the mechanism, which would result in decreased performance and damage to surfaces. A sensitivity-based gradient descent approach was used to find the optimum shape of a discretised surface. The model's accuracy has been quantified against a finite volume computational fluid dynamics simulation (ANSYS CFX) using the k-ω SST turbulence model. Preliminary results indicate a significant improvement in suction force compared to a simple geometry, achieved by retaining a pressure just above that at which cavitation would occur over as much of the surface area as possible.
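    A bare-bones version of the pressure analysis is easy to write down: continuity fixes the gap velocity, Bernoulli's equation (friction neglected here, unlike the Darcy-corrected model of the study) gives the radial pressure profile, integration gives the suction force, and the vapour pressure supplies the cavitation constraint. The geometry and flow rate below are illustrative assumptions.

    ```python
    import numpy as np

    rho, p0, p_vap = 1000.0, 101325.0, 2300.0  # water; ambient & vapour pressure, Pa
    Qf = 2.0e-4                                # working flow rate, m3/s
    r = np.linspace(0.005, 0.05, 400)          # radial stations, m
    h = np.full_like(r, 5e-4)                  # 0.5 mm gap; h(r) is the design variable

    v = Qf / (2 * np.pi * r * h)               # gap velocity from continuity
    # Bernoulli relative to the outer edge at ambient pressure
    p = p0 + 0.5 * rho * (v[-1] ** 2 - v ** 2)

    assert p.min() > p_vap, "cavitation constraint violated"
    dr = r[1] - r[0]
    force = float(np.sum((p0 - p) * 2 * np.pi * r) * dr)  # net suction force, N
    print(f"suction force = {force:.1f} N, min gap pressure = {p.min():.0f} Pa")
    ```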

  12. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE PAGES

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain; ...

    2017-09-23

    Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimises the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. As this is also the parameter with the greatest discrepancy between the tools, accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  13. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain

    Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimises the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. As this is also the parameter with the greatest discrepancy between the tools, accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  14. Assessing the impact of typeface design in a text-rich automotive user interface.

    PubMed

    Reimer, Bryan; Mehler, Bruce; Dobres, Jonathan; Coughlin, Joseph F; Matteson, Steve; Gould, David; Chahine, Nadine; Levantovsky, Vladimir

    2014-01-01

    Text-rich driver-vehicle interfaces are increasingly common in new vehicles, yet the effects of different typeface characteristics on task performance in this brief off-road glance context remain sparsely examined. Subjects completed menu selection tasks while in a driving simulator. Menu text was set either in a 'humanist' or 'square grotesque' typeface. Among men, use of the humanist typeface resulted in a 10.6% reduction in total glance time as compared to the square grotesque typeface. Total response time and number of glances showed similar reductions. The impact of typeface was either more modest or not apparent for women. Error rates for both males and females were 3.1% lower for the humanist typeface. This research suggests that optimised typefaces may mitigate some interface demands. Future work will need to assess whether other typeface characteristics can be optimised to further reduce demand, improve legibility, increase usability and help meet new governmental distraction guidelines. Practitioner Summary: Text-rich in-vehicle interfaces are increasingly common, but the effects of typeface on task performance remain sparsely studied. We show that among male drivers, menu selection tasks are completed with 10.6% less visual glance time when text is displayed in a 'humanist' typeface, as compared to a 'square grotesque'.

  15. Assessing the impact of typeface design in a text-rich automotive user interface

    PubMed Central

    Reimer, Bryan; Mehler, Bruce; Dobres, Jonathan; Coughlin, Joseph F.; Matteson, Steve; Gould, David; Chahine, Nadine; Levantovsky, Vladimir

    2014-01-01

    Text-rich driver–vehicle interfaces are increasingly common in new vehicles, yet the effects of different typeface characteristics on task performance in this brief off-road glance context remain sparsely examined. Subjects completed menu selection tasks while in a driving simulator. Menu text was set either in a ‘humanist’ or ‘square grotesque’ typeface. Among men, use of the humanist typeface resulted in a 10.6% reduction in total glance time as compared to the square grotesque typeface. Total response time and number of glances showed similar reductions. The impact of typeface was either more modest or not apparent for women. Error rates for both males and females were 3.1% lower for the humanist typeface. This research suggests that optimised typefaces may mitigate some interface demands. Future work will need to assess whether other typeface characteristics can be optimised to further reduce demand, improve legibility, increase usability and help meet new governmental distraction guidelines. Practitioner Summary: Text-rich in-vehicle interfaces are increasingly common, but the effects of typeface on task performance remain sparsely studied. We show that among male drivers, menu selection tasks are completed with 10.6% less visual glance time when text is displayed in a ‘humanist’ typeface, as compared to a ‘square grotesque’. PMID:25075429

  16. Noise in NC-AFM measurements with significant tip-sample interaction.

    PubMed

    Lübbe, Jannis; Temmen, Matthias; Rahe, Philipp; Reichling, Michael

    2016-01-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip-sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip-sample interaction, by the coupling between the amplitude and tip-sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip-sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops.

  17. Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context

    NASA Astrophysics Data System (ADS)

    Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian

    2016-05-01

    The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.

  18. Biomass supply chain optimisation for Organosolv-based biorefineries.

    PubMed

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  20. Observer-based perturbation extremum seeking control with input constraints for direct-contact membrane distillation process

    NASA Astrophysics Data System (ADS)

    Eleiwi, Fadi; Laleg-Kirati, Taous Meriem

    2018-06-01

    An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.

  1. On the dynamic rounding-off in analogue and RF optimal circuit sizing

    NASA Astrophysics Data System (ADS)

    Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena

    2014-04-01

    Frequently used approaches for solving discrete multivariable optimisation problems compute solutions using a continuous optimisation technique and then, using heuristics, round the variables off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily, since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible-region frontier) or to degraded results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.

  2. Optimisation of active suspension control inputs for improved performance of active safety systems

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor

    2018-01-01

    A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.

  3. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics and non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers needed to predict the global erosion phenomena accurately from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using a Newtonian and the non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface, and by a concentration-based suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.

  4. INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?

    PubMed

    Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P

    2015-01-01

    Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible, but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and on model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality: sorting the population once reduced simulation time by a factor of two, and storing person attributes in separate arrays instead of using person objects also proved more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimising the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
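    The data-locality observation (attribute arrays instead of person objects) is the classic struct-of-arrays versus array-of-structs trade-off, illustrated below in Python/NumPy. The population attributes and the query are invented, and the measured speed-up will differ from the simulator's.

    ```python
    import time
    import numpy as np

    N = 1_000_000

    # Array-of-structs: one Python object per person (poor data locality)
    class Person:
        __slots__ = ("age", "immune")
        def __init__(self, age, immune):
            self.age, self.immune = age, immune

    people = [Person(a, a % 7 == 0) for a in range(N)]

    t0 = time.perf_counter()
    n_aos = sum(1 for p in people if p.immune and p.age > 30)
    t_aos = time.perf_counter() - t0

    # Struct-of-arrays: one contiguous array per attribute
    # (cache-friendly and vectorisable)
    age = np.arange(N)
    immune = age % 7 == 0

    t0 = time.perf_counter()
    n_soa = int(np.count_nonzero(immune & (age > 30)))
    t_soa = time.perf_counter() - t0

    assert n_aos == n_soa
    print(f"AoS {t_aos:.3f}s vs SoA {t_soa:.3f}s")
    ```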

  5. Dark Energy Survey Year 1 Results: galaxy mock catalogues for BAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, S.; et al.

    Mock catalogues are a crucial tool in the analysis of galaxy survey data, both for the accurate computation of covariance matrices and for the optimisation of analysis methodology and validation of data sets. In this paper, we present a set of 1800 galaxy mock catalogues designed to match the Dark Energy Survey Year-1 BAO sample (Crocce et al. 2017) in abundance, observational volume, redshift distribution and uncertainty, and redshift-dependent clustering. The simulated samples were built upon HALOGEN (Avila et al. 2015) halo catalogues, based on a 2LPT density field with an exponential bias. For each of them, a lightcone is constructed by the superposition of snapshots in the redshift range 0.45 …

  6. A big-data model for multi-modal public transportation with application to macroscopic control and optimisation

    NASA Astrophysics Data System (ADS)

    Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert

    2015-11-01

    This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is the ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using the data extracted from a traffic simulator. A realistic test-case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle big quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potentials of the approach.
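    A minimal sketch of the modelling idea: build a row-stochastic transition matrix from trip counts between zones or modes and extract the stationary distribution, which summarises the long-run loading of the network. The counts below are invented, not the London data.

    ```python
    import numpy as np

    # Hypothetical trip counts between 4 zones/modes aggregated from data
    counts = np.array([[120.0,  30.0,  10.0,  5.0],
                       [ 25.0, 200.0,  40.0, 15.0],
                       [ 10.0,  35.0, 150.0, 30.0],
                       [  5.0,  20.0,  25.0, 90.0]])

    P = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic matrix

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    print("long-run occupancy:", np.round(pi, 3))
    ```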

  7. Experimental research and numerical optimisation of multi-point sheet metal forming implementation using a solid elastic cushion system

    NASA Astrophysics Data System (ADS)

    Tolipov, A. A.; Elghawail, A.; Shushing, S.; Pham, D.; Essa, K.

    2017-09-01

    There is a growing demand for flexible manufacturing techniques that can meet rapid changes in customer needs. A finite element numerical optimisation technique was used to optimise the multi-point sheet forming process. Multi-point forming (MPF) is a flexible sheet metal forming technique in which the same tool can be readily reconfigured to produce different parts. The process suffers from geometrical defects such as wrinkling and dimpling, which have been found to cause the major surface quality problems. This study investigated the influence of parameters such as elastic cushion hardness, blank holder force, coefficient of friction, cushion thickness and radius of curvature on the quality of parts formed in a flexible multi-point stamping die. A multi-point stamping process using a blank holder was therefore carried out to study wrinkling, dimpling, thickness variation and forming force, with the aim of determining the optimum values of these parameters. Finite element modelling (FEM) was employed to simulate the multi-point forming of hemispherical shapes. Using the response surface method, the effects of the process parameters on wrinkling, maximum deviation from the target shape and thickness variation were investigated. The results show that the best outcomes were obtained with an elastic cushion of appropriate thickness made of polyurethane with a hardness of Shore A90, and that the application of lubrication can improve the shape accuracy of the formed workpiece. Comparison with the numerical simulation results for the multi-point forming of hemispherical shapes using a blank holder confirmed that a suitable cushion hardness reduces wrinkling and maximum deviation.

  8. Metric optimisation for analogue forecasting by simulated annealing

    NASA Astrophysics Data System (ADS)

    Bliefernicht, J.; Bárdossy, A.

    2009-04-01

    It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is exploited by analogue forecasting techniques, which have a long history in weather forecasting and many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. But the forecast performance of the analogue method depends on user-defined criteria such as the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology for optimising both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation and compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. We present the concept of the optimisation algorithm and the outcome of the comparison, and demonstrate how a decision maker should apply a probability forecast to maximise its economic benefit.
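    The optimisation concept can be sketched compactly: simulated annealing over the weights of a weighted Euclidean distance, scored by the leave-one-out error of a one-nearest-analogue forecast. The data, proposal scale and cooling schedule below are illustrative assumptions; the study optimises the distance function and predictor domain against real reanalysis predictors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))                   # toy predictor fields
    y = 2.0 * X[:, 0] + X[:, 2] + rng.normal(scale=0.2, size=500)  # predictand

    def analogue_error(w):
        """Leave-one-out error of a 1-nearest-analogue forecast under the
        weighted distance d(a, b) = sum_k w_k (a_k - b_k)^2."""
        err = 0.0
        for i in range(100):                        # subsample for speed
            d = ((X - X[i]) ** 2 * w).sum(axis=1)
            d[i] = np.inf                           # exclude the target itself
            err += abs(y[i] - y[np.argmin(d)])
        return err / 100

    w, e, T = np.ones(5), None, 1.0
    e = analogue_error(w)
    for step in range(300):
        w_new = np.clip(w + rng.normal(scale=0.2, size=5), 0.0, None)
        e_new = analogue_error(w_new)
        if e_new < e or rng.random() < np.exp((e - e_new) / T):
            w, e = w_new, e_new                     # Metropolis acceptance
        T *= 0.99                                   # cooling schedule
    print(np.round(w, 2), round(e, 3))
    ```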

  9. The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.

    PubMed

    Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H

    2017-12-01

    To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used: Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE), each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs were investigated: Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO). The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2*, were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. All sequences yielded R2* measurements in the sternocleidomastoid muscle comparable to a conventional, i.e. non-blood-suppressed, sequence. Both Bloch simulations and volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different between the k-space view orders. Smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*. A DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high-quality black-blood R2* measurements of the carotid vessel wall. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Systemic solutions for multi-benefit water and environmental management.

    PubMed

    Everard, Mark; McInnes, Robert

    2013-09-01

    The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Directionality compensation for linear multivariable anti-windup synthesis

    NASA Astrophysics Data System (ADS)

    Adegbege, Ambrose A.; Heath, William P.

    2015-11-01

    We develop new synthesis procedures for optimising anti-windup control applicable to open-loop exponentially stable multivariable plants subject to hard bounds on the inputs. The optimising anti-windup control falls into a class of compensator commonly termed directionality compensation. The computation of the control involves the online solution of a low-order quadratic programme in place of simple saturation. We exploit the structure of the quadratic programme to incorporate directionality information into the offline anti-windup synthesis using a decoupled architecture similar to that proposed in the literature for anti-windup schemes with simple saturation. We demonstrate the effectiveness of the design compared to several schemes using a simulated example. Preliminary results of this work have been published in the proceedings of the IEEE Conference on Decision and Control, Orlando, 2011 (Adegbege & Heath, 2011a).
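    The "low-order quadratic programme in place of simple saturation" admits a very small sketch: minimise a weighted distance to the unconstrained control demand subject to the input bounds, solved here as a bounded least-squares problem in SciPy. The weight matrix and demand vector are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def directionality_comp(u_d, W, lo, hi):
        """Replace simple saturation with the QP
           min (u - u_d)^T W (u - u_d)  s.t.  lo <= u <= hi."""
        A = np.linalg.cholesky(W).T          # factor W = A^T A
        return lsq_linear(A, A @ u_d, bounds=(lo, hi)).x

    u_d = np.array([1.8, -0.4])              # unconstrained controller demand
    W = np.array([[2.0, 0.6],                # weight encoding plant
                  [0.6, 1.0]])               # directionality (assumed)
    u = directionality_comp(u_d, W, -np.ones(2), np.ones(2))
    print("saturation:", np.clip(u_d, -1, 1), " QP:", np.round(u, 3))
    ```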

  12. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.

  13. Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu

    2017-01-01

    In cognitive radio (CR) systems, appropriate power allocation can maximise the transmission rate of CR users, or secondary users (SUs), while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm, which combines the advantages of the conventional artificial fish swarm (AFS) algorithm and particle swarm optimisation (PSO). Simulation comparisons of the IAFS algorithm with other intelligent algorithms illustrate its superiority; as a result, our proposed scheme outperforms the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, the proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other works mentioned.

  14. Thermodynamic properties of solvated peptides from selective integrated tempering sampling with a new weighting factor estimation algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Lin; Xie, Liangxu; Yang, Mingjun

    2017-04-01

    Conformational sampling on a rugged energy landscape is a long-standing challenge in computer simulations. The recently developed integrated tempering sampling method, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscapes and functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratio between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimisation and production algorithm to other complicated systems such as biochemical reactions in solution.

  15. A chaotic model for advertising diffusion problem with competition

    NASA Astrophysics Data System (ADS)

    Ip, W. H.; Yung, K. L.; Wang, Dingwei

    2012-08-01

    In this article, the authors extend Dawid and Feichtinger's chaotic advertising diffusion model to the duopoly case. A computer simulation system is used to test the enhanced model. Based on an analysis of the simulation results, it is found that the best advertising strategy in a duopoly is to increase the advertising investment so as to reach the best win-win situation, in which oscillation of market share does not occur. In order to reach this situation efficiently, we define a synthetic index and two thresholds, and we propose an estimation method for the parameters of the index and the thresholds. The win-win situation can then be reached simply by selecting the control parameters so as to bring the synthetic index close to the threshold of the minimum-oscillation state. The numerical example and computational results indicate that the proposed chaotic model is useful for describing and analysing the advertising diffusion process in a duopoly, and that it is an efficient tool for the selection and optimisation of advertising strategy.

  16. Alternative Zoning Scenarios for Regional Sustainable Land Use Controls in China: A Knowledge-Based Multiobjective Optimisation Model

    PubMed Central

    Xia, Yin; Liu, Dianfeng; Liu, Yaolin; He, Jianhua; Hong, Xiaofeng

    2014-01-01

    Alternative land use zoning scenarios provide guidance for sustainable land use controls. This study focused on an ecologically vulnerable catchment on the Loess Plateau in China, proposed a novel land use zoning model, and generated alternative zoning solutions to satisfy the various requirements of land use stakeholders and managers. This model combined multiple zoning objectives, i.e., maximum zoning suitability, maximum planning compatibility and maximum spatial compactness, with land use constraints by using goal programming technique, and employed a modified simulated annealing algorithm to search for the optimal zoning solutions. The land use zoning knowledge was incorporated into the initialisation operator and neighbourhood selection strategy of the simulated annealing algorithm to improve its efficiency. The case study indicates that the model is both effective and robust. Five optimal zoning scenarios of the study area were helpful for satisfying the requirements of land use controls in loess hilly regions, e.g., land use intensification, agricultural protection and environmental conservation. PMID:25170679

  17. Free energy, precision and learning: the role of cholinergic neuromodulation

    PubMed Central

    Moran, Rosalyn J.; Campo, Pablo; Symmonds, Mkael; Stephan, Klaas E.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Acetylcholine (ACh) is a neuromodulatory transmitter implicated in perception and learning under uncertainty. This study combined computational simulations and pharmaco-electroencephalography in humans, to test a formulation of perceptual inference based upon the free energy principle. This formulation suggests that acetylcholine enhances the precision of bottom-up synaptic transmission in cortical hierarchies by optimising the gain of supragranular pyramidal cells. Simulations of a mismatch negativity paradigm predicted a rapid trial-by-trial suppression of evoked sensory prediction error (PE) responses that is attenuated by cholinergic neuromodulation. We confirmed this prediction empirically with a placebo-controlled study of cholinesterase inhibition. Furthermore – using dynamic causal modelling – we found that drug-induced differences in PE responses could be explained by gain modulation in supragranular pyramidal cells in primary sensory cortex. This suggests that acetylcholine adaptively enhances sensory precision by boosting bottom-up signalling when stimuli are predictable, enabling the brain to respond optimally under different levels of environmental uncertainty. PMID:23658161

  18. An integrated and dynamic optimisation model for the multi-level emergency logistics network in anti-bioterrorism system

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Zhao, Lindu

    2012-08-01

    Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to epidemic areas in the early rescue cycles affect demand later on. In this article, an integrated and dynamic optimisation model with time-varying demand, based on the epidemic diffusion rule, is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. The application of the optimisation model is then presented, together with a short sensitivity analysis of the key parameters in the time-varying demand forecast model. The results show that both the model and the solution algorithm are useful in practice, and that both objectives, inventory level and emergency rescue cost, can be controlled effectively. The model can thus provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers a useful reference for issues pertaining to bioterrorism.

  19. Determining the Number of Participants Needed for the Usability Evaluation of E-Learning Resources: A Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Davids, Mogamat Razeen; Harvey, Justin; Halperin, Mitchell L.; Chikte, Usuf M. E.

    2015-01-01

    The usability of computer interfaces has a major influence on learning. Optimising the usability of e-learning resources is therefore essential. However, this may be neglected because of time and monetary constraints. User testing is a common approach to usability evaluation and involves studying typical end-users interacting with the application…
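
    The record above is truncated, but studies of this kind typically rest on the problem-discovery model in which each participant independently detects each usability problem with probability p, so a panel of n users discovers an expected fraction 1 - (1 - p)^n of the problems. A small Monte Carlo sketch of that assumed model, in Python:

        import random

        def simulated_discovery(n_users, n_problems=20, p=0.3, trials=2000, seed=42):
            # Fraction of problems found by at least one of n_users, averaged over trials.
            rng = random.Random(seed)
            total = 0.0
            for _ in range(trials):
                found = sum(
                    any(rng.random() < p for _ in range(n_users))
                    for _ in range(n_problems)
                )
                total += found / n_problems
            return total / trials

        for n in (3, 5, 8, 12):
            # Monte Carlo estimate versus the closed-form expectation 1 - (1 - p)**n.
            print(n, round(simulated_discovery(n), 3), round(1 - (1 - 0.3) ** n, 3))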

  20. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of ongoing research examining the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected by drive testing a sample of roads in the study area are analysed, and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used with different settings: first, inverse distance weighting (IDW) with various powers and numbers of neighbours; second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models and different numbers of neighbours. For comparison of the results, check points drawn from the same drive-test data are used: predicted values at the check points are extracted from each surface and their differences from the actual values are computed. The output of this research helps identify an optimised and accurate model for coverage prediction.
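
    For concreteness, here is a minimal Python sketch of the first of the two interpolators, inverse distance weighting over drive-test samples; the power and the number of neighbours are the settings varied in the study. The sample values are invented.

        import math

        def idw(x, y, samples, power=2.0, k=8):
            # samples: list of (xi, yi, value) drive-test points, e.g. RSSI in dBm.
            nearest = sorted(samples, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)[:k]
            num = den = 0.0
            for xi, yi, v in nearest:
                d = math.hypot(xi - x, yi - y)
                if d == 0.0:
                    return v                    # exact hit: return the sample itself
                w = 1.0 / d ** power
                num += w * v
                den += w
            return num / den

        samples = [(0, 0, -70.0), (1, 0, -75.0), (0, 1, -72.0), (2, 2, -90.0)]
        print(idw(0.5, 0.5, samples, power=2.0, k=3))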

  1. Stability and optimised H∞ control of tripped and untripped vehicle rollover

    NASA Astrophysics Data System (ADS)

    Jin, Zhilin; Zhang, Lei; Zhang, Jiale; Khajepour, Amir

    2016-10-01

    Vehicle rollover is a serious type of traffic accident. In order to accurately evaluate the possibility of untripped and some special tripped vehicle rollovers, and to prevent rollover under unpredictable parameter variations and harsh driving conditions, a new rollover index and an anti-roll control strategy are proposed in this paper. Taking the roll-induced deflections of steering and suspension at the axles into consideration, a six-degrees-of-freedom dynamic model is established, including lateral, yaw, roll, and vertical motions of the sprung and unsprung masses. From vehicle dynamics theory, a new rollover index is developed to predict rollover risk under both untripped and special tripped situations; this index is validated by Carsim simulations. In addition, an H-infinity controller with an electro-hydraulic brake system is optimised by a genetic algorithm to improve the anti-rollover performance of the vehicle. The stability and robustness of the active rollover prevention control system are analysed through numerical simulations. The results show that the control system considerably raises the critical rollover speed and is robust to variations in the number of passengers and the longitudinal position of the centre of gravity.

  2. A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications

    NASA Astrophysics Data System (ADS)

    Robin, J.; Tanter, M.; Pernot, M.

    2017-09-01

    Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest using a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device's gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. The model decouples ultrasound propagation in the empty cavity from multiple scattering in the scattering medium. It was validated numerically and experimentally using a 2D-TRC, and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were then confirmed by numerical simulations.

  3. Development of a decision support system for small reservoir irrigation systems in rainfed and drought prone areas.

    PubMed

    Balderama, Orlando F

    2010-01-01

    An integrated computer program called the Cropping System and Water Management Model (CSWM), with a three-step structure (expert system, simulation, optimisation), was developed to support a range of decisions for rainfed farming, i.e. crop selection, scheduling and optimisation. The system was used for agricultural planning with emphasis on sustainable agriculture in rainfed areas through the use of small farm reservoirs for increased production and for resource conservation and management. The model was applied using crop, soil, climate and water resource data from the Philippines. Four sets of data representing the country's different rainfall classifications were collected, analysed, and used as model input. Simulations were also performed for different planting dates, wet- and dry-period probabilities, and various capacities of the reservoir used for supplemental irrigation. The analysis yielded useful information for determining suitable crops in the region and cropping schedules and patterns appropriate to the specific climate conditions. In addition, optimisation of land and water resource use can be achieved in areas partly irrigated by small reservoirs.

  4. Cost optimisation and minimisation of the environmental impact through life cycle analysis of the waste water treatment plant of Bree (Belgium).

    PubMed

    De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B

    2011-01-01

    An ASM2da model of the full-scale waste water treatment plant of Bree (Belgium) was built and showed very good agreement with reference operational data. This basic model was extended to include an accurate calculation of environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios were studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate set point for post-denitrification. Both the cost-based and the Life Cycle Assessment (LCA)-based optimisation approaches can significantly lower the cost and environmental footprint. However, the LCA-based approach has some advantages over cost minimisation for an existing full-scale plant: it tends to choose more logical control settings, resulting in safer plant operation with lower risk of breaching the consents, and it yields a better effluent at a slightly increased cost.

  5. Robustness analysis of bogie suspension components Pareto optimised values

    NASA Astrophysics Data System (ADS)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, and yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept, and the multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised bogie suspension values is robust against uncertainties in the design parameters, and that the probability of failure is small for parameter uncertainties with COV up to 0.1.
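
    A hedged Python sketch of the perturbation scheme described above: design parameters are sampled lognormally around their nominal (Pareto optimised) values with a chosen coefficient of variation, and a failure probability is estimated by plain Monte Carlo. The vehicle-dynamics response is replaced by a toy function, and the paper's maximum-entropy and fractional-moment machinery is not reproduced.

        import math
        import random

        def lognormal_around(nominal, cov, rng):
            # Lognormal sample with mean 'nominal' and coefficient of variation 'cov'.
            sigma2 = math.log(1.0 + cov ** 2)
            mu = math.log(nominal) - 0.5 * sigma2
            return rng.lognormvariate(mu, math.sqrt(sigma2))

        def failure_probability(nominals, cov=0.1, limit=1.5, trials=50000, seed=0):
            rng = random.Random(seed)
            failures = 0
            for _ in range(trials):
                params = [lognormal_around(v, cov, rng) for v in nominals]
                response = sum(params) / len(params)  # toy stand-in for the wear/comfort response
                if response > limit:
                    failures += 1
            return failures / trials

        # Five nominal suspension parameters (illustrative values only).
        print(failure_probability([1.0, 1.2, 0.9, 1.1, 1.0], cov=0.1))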

  6. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    PubMed

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

    This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case for any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed-control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode.

  7. Analysis and optimisation of a mixed fluid cascade (MFC) process

    NASA Astrophysics Data System (ADS)

    Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng

    2017-04-01

    A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy; any performance enhancement of the liquefaction process will therefore significantly reduce energy consumption. The MFC process is simulated and analysed using the proprietary software Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant composition on MFC performance are investigated and presented. The excellent numerical capabilities and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS, and a genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol or 4.366 kW h/kmol when the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to further improve the thermodynamic efficiency of the process, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels, a liquefaction cycle, and a sub-cooling cycle with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. The weak performance of the suggested liquefaction cycle configuration is attributed to the unbalanced distribution of cold energy across the liquefaction temperature range.

  8. Optimisation of flight dynamic control based on many-objectives meta-heuristic: a comparative study

    NASA Astrophysics Data System (ADS)

    Bureerat, Sujin; Pholdee, Nantiwat; Radpukdee, Thana

    2018-05-01

    The development of many-objective meta-heuristics (MnMHs) is currently a topic of considerable interest, as they suit real-world optimisation problems, which usually involve many objectives. However, most MnMHs have been developed and tested on standard test functions, and their use in real applications is rare. In this work, MnMHs are therefore applied to the design optimisation of flight dynamic control. The design problem is posed as finding control gains that minimise the control effort, the spiral root, the damping-in-roll root, and the sideslip angle deviation, and that maximise the damping ratio of the dutch-roll complex pair, the dutch-roll frequency, and the bank angle at pre-specified times of 1 second and 2.8 seconds, subject to several constraints based on Military Specifications (1969) requirements. Several established many-objective meta-heuristics (MnMHs) are used to solve the problem and their performances are compared. This work thus investigates the performance of several MnMHs for flight control, and the results obtained will serve as a baseline for future development of flight dynamics and control.

  9. Speckle-based at-wavelength metrology of X-ray mirrors with super accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal, E-mail: kawal.sawhney@diamond.ac.uk

    2016-05-15

    X-ray active mirrors, such as bimorph and mechanically bendable mirrors, are increasingly being used on beamlines at modern synchrotron facilities to generate either focused or “tophat” beams. As well as optical tests in the metrology lab, it is becoming increasingly important to optimise and characterise active optics under actual beamline operating conditions. The recently developed X-ray speckle-based at-wavelength metrology technique has shown great potential. The technique has been established and further developed at the Diamond Light Source and is increasingly being used to optimise active mirrors. Details of the X-ray speckle-based at-wavelength metrology technique and an example of its applicability in characterising and optimising a micro-focusing bimorph X-ray mirror are presented. Importantly, an unprecedented angular sensitivity in the range of two nanoradians for measuring the slope error of an optical surface has been demonstrated. Such a super-precision metrology technique will be beneficial to manufacturers of polished mirrors and also for the optimisation of beam shaping during experiments.

  10. A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging.

    PubMed

    Gallio, Elena; Rampado, Osvaldo; Gianaria, Elena; Bianchi, Silvio Diego; Ropolo, Roberto

    2015-01-01

    Conventional radiology is performed by means of digital detectors, with various types of technology and different performance in terms of efficiency and image quality. Following the arrival of a new digital detector in a radiology department, all the staff involved should adapt the procedure parameters to the properties of the detector, in order to achieve an optimal result in terms of correct diagnostic information and minimum radiation risk for the patient. The aim of this study was to develop and validate software capable of simulating a digital X-ray imaging system using graphics processing unit (GPU) computing. All radiological image components were implemented in this application: an X-ray tube with primary beam, a virtual patient, noise, scatter radiation, a grid and a digital detector. Three different digital detectors (two digital radiography systems and one computed radiography system) were implemented. In order to validate the software, we carried out a quantitative comparison of simulated images of geometrical and anthropomorphic phantoms with acquired images. In terms of average pixel values, the maximum differences were below 15%, while the noise values agreed to within 20%. The relative trends of contrast-to-noise ratio versus beam energy and intensity were well simulated. Total calculation times were below 3 seconds for clinical images with actual pixel dimensions below 0.2 mm. The application proved to be efficient and realistic. Short calculation times and the accuracy of the results make this software a useful tool for training operators and for dose optimisation studies.

  11. A framework for the computer-aided planning and optimisation of manufacturing processes for components with functional graded properties

    NASA Astrophysics Data System (ADS)

    Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.

    2014-05-01

    In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules: the "Component Description", the "Expert System" for the synthesis of several process chains, and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model with a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling, and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for this purpose: the ontology represents all dependencies in a structured way and connects the information in the knowledge base via relations. The third module evaluates the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, and the result of the best parameterisation is used as the representative value. Finally, the process chain capable of manufacturing a functionally graded component optimally with regard to the property distributions of the component description is presented by means of a dedicated specification technique.

  12. A radiobiology-based inverse treatment planning method for optimisation of permanent l-125 prostate implants in focal brachytherapy.

    PubMed

    Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A

    2016-01-07

    Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to the entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8%, whilst maintaining high values of tumour control probability (TCP). On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed by multi-parametric MRI to provide a personalised-medicine approach.

  13. A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems

    NASA Astrophysics Data System (ADS)

    Liu, Zuolin; Xu, Jian

    2018-04-01

    In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integrating by parts. In this way, a loss function in integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. For linear or nonlinear systems, the loss function can then be minimised with the traditional least-squares algorithm or its iterative counterpart, respectively. The method can be used to effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, the method still guarantees high accuracy in identifying linear and nonlinear parameters.
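
    A simplified Python sketch of the end result the method aims at: recovering the damping and stiffness of a linear oscillator m*x'' + c*x' + k*x = f(t) from sampled displacements. This uses plain finite differences and ordinary least squares rather than the paper's piecewise-linear Galerkin loss with integration by parts, so it illustrates only the identification idea.

        import numpy as np

        def identify(t, x, f, m=1.0):
            dt = t[1] - t[0]
            xd = np.gradient(x, dt)      # approximate x'
            xdd = np.gradient(xd, dt)    # approximate x''
            # Model: c*x' + k*x = f - m*x''  ->  least-squares solve for [c, k].
            A = np.column_stack([xd, x])
            b = f - m * xdd
            (c, k), *_ = np.linalg.lstsq(A, b, rcond=None)
            return c, k

        # Synthetic free-decay data for true c = 0.4, k = 4.0 (m = 1).
        t = np.linspace(0.0, 10.0, 2001)
        wn, zeta = 2.0, 0.1                      # k = wn**2, c = 2*zeta*wn
        wd = wn * np.sqrt(1.0 - zeta ** 2)
        x = np.exp(-zeta * wn * t) * np.cos(wd * t)
        f = np.zeros_like(t)
        print(identify(t, x, f))                 # expect roughly (0.4, 4.0)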

  14. Multi-objective optimization of radiotherapy: distributed Q-learning and agent-based simulation

    NASA Astrophysics Data System (ADS)

    Jalalimanesh, Ammar; Haghighi, Hamidreza Shahabi; Ahmadi, Abbas; Hejazian, Hossein; Soltani, Madjid

    2017-09-01

    Radiotherapy (RT) is among the standard techniques for the treatment of cancerous tumours, and many cancer patients are treated in this manner. Treatment planning is the most important phase in RT and plays a key role in achieving therapy quality. As the goal of RT is to irradiate the tumour with adequately high levels of radiation while sparing neighbouring healthy tissue as much as possible, it is naturally a multi-objective problem. In this study, we propose an agent-based model of vascular tumour growth that also captures the effects of RT. We then use a multi-objective distributed Q-learning algorithm to find Pareto-optimal solutions for calculating the dynamic RT dose. We consider multiple objectives, and each group of optimiser agents attempts to optimise one of them iteratively; at the end of each iteration, the agents combine their solutions to shape the Pareto front of the multi-objective problem. We propose a new approach by defining three treatment-planning schemes created from different combinations of our objectives, namely invasive, conservative and moderate. In the invasive scheme, we prioritise killing cancer cells and pay less attention to irradiation effects on normal cells. In the conservative scheme, we take more care of normal cells and try to destroy cancer cells in a less aggressive manner. The moderate scheme stands in between. For implementation, each of these schemes is handled by one agent in the MDQ-learning algorithm, and the Pareto-optimal solutions are discovered through the collaboration of agents. By applying this methodology, we could reach Pareto treatment plans by building different scenarios of tumour growth and RT. The proposed multi-objective optimisation algorithm generates robust solutions and finds the best treatment plan for different conditions.
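
    A minimal tabular Q-learning sketch in Python, showing only the update rule Q(s,a) <- Q(s,a) + alpha*(r + gamma*max Q(s',·) - Q(s,a)) that the distributed algorithm builds on. The paper couples its learners to an agent-based tumour simulator and to multiple objectives; here a made-up one-dimensional "tumour burden" state and a single toy reward trading kill against toxicity stand in for all of that.

        import random

        N_STATES = 11                 # tumour burden level 0..10 (toy state)
        ACTIONS = (0, 1, 2)           # dose level: low / mid / high (toy actions)
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

        def step(state, action, rng):
            # Toy dynamics: a higher dose shrinks the burden more but costs toxicity.
            shrink = rng.choice([0, action, action + 1])
            nxt = max(0, state - shrink)
            reward = (state - nxt) - 0.4 * action
            return nxt, reward

        def train(episodes=5000, seed=0):
            rng = random.Random(seed)
            Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
            for _ in range(episodes):
                s = N_STATES - 1
                while s > 0:          # episode ends when the burden reaches 0
                    if rng.random() < EPS:
                        a = rng.choice(ACTIONS)                    # explore
                    else:
                        a = max(ACTIONS, key=lambda a: Q[s][a])    # exploit
                    s2, r = step(s, a, rng)
                    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
                    s = s2
            return Q

        Q = train()
        print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])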

  15. A new compound arithmetic crossover-based genetic algorithm for constrained optimisation in enterprise systems

    NASA Astrophysics Data System (ADS)

    Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu

    2017-01-01

    In many real industrial applications, the integration of raw data with a sound methodology can support economically sound decision-making, and most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GAs is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs quite well and that the resulting CAC10-GA outperforms the AC10-GA.
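
    For reference, a Python sketch of the base operators the paper modifies: the classic real-coded arithmetic crossover, in which each child is a convex combination of the two parents, together with uniform mutation. The "compound" construction of CAC itself is not specified in this record and is not reproduced here.

        import random

        def arithmetic_crossover(p1, p2, rng):
            lam = rng.random()                     # mixing coefficient in [0, 1]
            c1 = [lam * a + (1.0 - lam) * b for a, b in zip(p1, p2)]
            c2 = [(1.0 - lam) * a + lam * b for a, b in zip(p1, p2)]
            return c1, c2

        def uniform_mutation(ind, bounds, rate, rng):
            # With probability 'rate', reset a gene uniformly within its bounds.
            return [rng.uniform(*bounds[i]) if rng.random() < rate else g
                    for i, g in enumerate(ind)]

        rng = random.Random(7)
        bounds = [(-5.0, 5.0)] * 3
        print(arithmetic_crossover([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], rng))
        print(uniform_mutation([1.0, 2.0, 3.0], bounds, 0.2, rng))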

  16. Simulation based training in a publicly funded home birth programme in Australia: A qualitative study.

    PubMed

    Kumar, Arunaz; Nestel, Debra; Stoyles, Sally; East, Christine; Wallace, Euan M; White, Colleen

    2016-02-01

    Birth at home is a safe and appropriate choice for healthy women with a low risk pregnancy. However, there is a small risk of emergencies requiring immediate, skilled management to optimise maternal and neonatal outcomes. We developed and implemented a simulation workshop designed to run in a home-based setting to assist with emergency training for midwives and paramedical staff. The workshop was evaluated by assessing participants' satisfaction and responses to key learning issues. Midwifery and emergency paramedical staff attending home births participated in a simulation workshop in which they were required to manage birth emergencies in real time with the limited resources available in that setting. They completed pre-test and post-test evaluation forms exploring the content and utility of the workshops, and content analysis was performed on the qualitative data regarding the most important learning from the simulation activity. A total of 73 participants attended the workshop (midwifery = 46, paramedical = 27), and 110 comments were made by 49 participants. The most frequently identified key learning elements related to communication (among midwives, paramedical and hospital staff and with the woman's partner), followed by recognising the role of other health care professionals, developing an understanding of the process, and the importance of planning ahead. The home birth simulation workshop was found to be a useful tool by staff who provide care to women having a planned home birth; developing clear communication and teamwork were the key learning principles guiding their practice. Copyright © 2015 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  17. Time matters--realism in resuscitation training.

    PubMed

    Krogh, Kristian B; Høyer, Christian B; Ostergaard, Doris; Eika, Berit

    2014-08-01

    The advanced life support guidelines recommend 2-min cycles of cardiopulmonary resuscitation (CPR) and minimal hands-off time to ensure sufficient cardiac and cerebral perfusion, yet we have observed doctors shortening the CPR intervals during resuscitation attempts. During simulation-based resuscitation training, the recommended 2-min CPR cycles are often deliberately shortened in order to increase the number of scenarios. The aim of this study was to test whether keeping 2-min CPR cycles during resuscitation training ensures better adherence to time during resuscitation in a simulated setting. The study was designed as a randomised controlled trial. Fifty-four 4th-year medical students with no prior advanced resuscitation training participated in an extra-curricular one-day advanced life support course. Participants were randomised to simulation-based training using either real-time CPR cycles (120 s) or shortened cycles (30-45 s instead of 120 s) in the scenarios. Adherence to time was measured using the European Resuscitation Council's Cardiac Arrest Simulation Test (CASTest) in retention tests conducted one and 12 weeks after the course. The real-time group deviated significantly less from the recommended 2-min CPR cycles (absolute deviation from 120 s: mean 13 s; standard deviation (SD) 8) than the shortened-cycle group (mean 45 s; SD 19) when tested (p < 0.001). This study indicates that time is an important part of fidelity: variables critical for performance, such as adherence to time in resuscitation, should be kept realistic during training to optimise outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. 3D interlock design 100% PVDF piezoelectric to improve energy harvesting

    NASA Astrophysics Data System (ADS)

    Talbourdet, Anaëlle; Rault, François; Lemort, Guillaume; Cochrane, Cédric; Devaux, Eric; Campagne, Christine

    2018-07-01

    Piezoelectric textile structures based on 100% poly(vinylidene fluoride) (PVDF) were developed and characterised. Multifilaments of 246 tex were produced by melt spinning; the mechanical stretching during the process provides PVDF fibres with a piezoelectric β-phase content of up to 97%, as measured by FTIR. Several studies have addressed piezoelectric PVDF-based flexible structures (films or textiles); the aim of this study is to investigate the differences between 2D and 3D woven fabrics made from 100% piezoelectric PVDF multifilament yarns with an optimised piezoelectric crystalline phase. The textile structures were poled after the weaving process, and a maximum output voltage of 2.3 V was observed for the 3D woven fabric under compression in DMA tests. Energy harvesting is optimised in the 3D interlock thanks to the stressing of the multifilaments through the thickness. The addition of a resistor made it possible to measure an energy of 10.5 μJ m⁻² over 10 compression cycles of 5 s each.

  19. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    NASA Astrophysics Data System (ADS)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to address these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the manufacturer's total profit while improving the service aspects of retailing, setting different prices with arbitrage taken into consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  20. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed, and a fast feature selection method was employed to select the more efficient features. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, on PSO and the discrete firefly algorithm, and on a hybrid of error back-propagation and a genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method, which uses a lighter network structure.

  1. Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.

    PubMed

    García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M

    2014-12-01

    Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming to maximise COD conversion into methane while maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) so that further gains in methane productivity can be achieved. The feasibility of the blends calculated with this methodology was first tested, and accurately predicted, with an ADM1-based co-digestion model, and then validated in a continuously operated pilot plant treating, over several months, different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/(L d) and hydraulic retention times between 32 and 40 days under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
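
    An illustrative Python sketch of the blending step, in the spirit of the method but with invented coefficients: maximise the COD fed (a proxy for methane yield) subject to simple nitrogen and capacity restrictions. The paper's actual restrictions and their adaptive relaxation are not reproduced.

        from scipy.optimize import linprog

        # Decision variables: daily feed (m3/d) of glycerine, gelatine, pig manure.
        cod = [1200.0, 350.0, 60.0]       # gCOD per m3 (hypothetical values)
        nitrogen = [0.0, 28.0, 4.5]       # gN per m3 (hypothetical values)

        c = [-v for v in cod]             # linprog minimises, so negate to maximise COD
        A_ub = [nitrogen,                 # keep total N load below an inhibition threshold
                [1.0, 1.0, 1.0]]          # keep total feed within digester capacity
        b_ub = [120.0, 10.0]
        bounds = [(0.0, 6.0), (0.0, 6.0), (0.5, 10.0)]   # manure has a minimum share

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print(res.x, -res.fun)            # optimal blend and its total COD fed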

  2. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, which yields poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced that incorporates nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
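
    The key ingredient of the improvement, selecting guides from the nondominated set rather than from a single best solution, reduces to a Pareto-dominance filter. A minimal Python sketch (minimisation assumed; the swarm-update logic itself is omitted):

        def dominates(a, b):
            # a dominates b: no worse in every objective and strictly better in one.
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def nondominated(points):
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
        print(nondominated(pts))   # (3.0, 3.0) is dominated by (2.0, 2.0)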

  3. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, which yields poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced that incorporates nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  4. Angle selective backscattered electron contrast in the low-voltage scanning electron microscope: Simulation and experiment for polymers.

    PubMed

    Wan, Q; Masters, R C; Lidzey, D; Abrams, K J; Dapor, M; Plenderleith, R A; Rimmer, S; Claeyssens, F; Rodenburg, C

    2016-12-01

    Recently developed detectors can deliver high-resolution, high-contrast images of nanostructured carbon-based materials in low-voltage scanning electron microscopes (LVSEM) with beam deceleration. Monte Carlo simulations are used to predict the exact imaging conditions under which purely compositional contrast can be obtained and optimised. This allows the electron signal intensity under angle-selective conditions for back-scattered electron (BSE) imaging in LVSEM to be predicted and compared with experimental signals. Angle-selective detection with a concentric back-scattered (CBS) detector is modelled both in the absence and in the presence of a deceleration field. The validity of the model predictions for both cases was tested experimentally for amorphous C and Cu, and applied to complex nanostructured carbon-based materials, namely a Poly(N-isopropylacrylamide)/Poly(ethylene glycol) Diacrylate (PNIPAM/PEGDA) semi-interpenetrating network (IPN) and a Poly(3-hexylthiophene-2,5-diyl) (P3HT) film, to map nano-scale composition and crystallinity distributions while avoiding experimental imaging conditions that lead to mixed topographical and compositional contrast. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Options for reducing food waste by quality-controlled logistics using intelligent packaging along the supply chain.

    PubMed

    Heising, Jenneke K; Claassen, G D H; Dekker, Matthijs

    2017-10-01

    Optimising supply chain management can help to reduce food waste. This paper describes how intelligent packaging can reduce food waste when applied to supply chain management based on quality-controlled logistics (QCL). Intelligent packaging senses compounds in the package that correlate with the critical quality attribute of a food product. The information on the quality of each individually packaged food item provided by the intelligent packaging can then be used for QCL. A conceptual approach explains how monitoring food quality with intelligent packaging sensors makes it possible to obtain information about the variation in the quality of foods and to use a dynamic expiration date (IP-DED) on a food package. The conceptual approach is supported by quantitative data from simulations of the effect of using intelligent packaging information in supply chain management with the goal of reducing food waste. The simulations show that by using the quality information provided by intelligent packaging, QCL can substantially reduce food waste, and that combining QCL with dynamic pricing based on the predicted expiry dates promises a further waste reduction.

  6. Effects of glovebox gloves on grip and key pinch strength and contact forces for simulated manual operations with three commonly used hand tools.

    PubMed

    Sung, Peng-Cheng

    2014-01-01

    This study examined the effects of glovebox gloves on maximum grip and key pinch strength for 11 females, and on the contact forces generated in simulated tasks with a roller, a pair of tweezers and a crescent wrench. The independent variables were glove material (butyl, CSM/hypalon and neoprene), glove thickness (two levels), and the number of glove layers worn (single, double and triple gloving). CSM/hypalon and butyl gloves produced greater grip strength than the neoprene gloves, and CSM/hypalon gloves also lowered contact forces for the roller and wrench tasks. Single gloving and thin gloves improved hand strength performance, whereas triple layers lowered contact forces for all tasks. Based on these results, recommendations are provided for the selection and design of glovebox gloves for the three hand tools, so as to minimise the effects on hand strength and optimise protection of the palmar hand, thereby improving safety and health in glovebox environments where glove usage is a necessity.

  7. Measurements and TCAD simulation of novel ATLAS planar pixel detector structures for the HL-LHC upgrade

    NASA Astrophysics Data System (ADS)

    Nellist, C.; Dinu, N.; Gkougkousis, E.; Lounis, A.

    2015-06-01

    The LHC accelerator complex will be upgraded between 2020 and 2022 to the High-Luminosity LHC, to considerably increase the statistics available for the various physics analyses. To operate under these challenging new conditions and maintain excellent performance in track reconstruction and vertex location, the ATLAS pixel detector must be substantially upgraded, and a full replacement is expected. Processing techniques for novel pixel designs are optimised through characterisation of test structures in a clean room and through simulations with Technology Computer Aided Design (TCAD). A method to study non-perpendicular tracks through a pixel device is discussed. A comparison of TCAD simulations with Secondary Ion Mass Spectrometry (SIMS) measurements, used to investigate the doping profile of the structures and validate the simulation process, is also presented.

  8. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
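
    A toy Python sketch of the calibration idea: particle swarm optimisation minimising the misfit between "observed" data and a model's output over two rate coefficients. The CREST rate law and the hydrocode are far outside this scope, so a two-parameter exponential rise stands in for the simulated response.

        import math
        import random

        def model(t, a, b):
            return a * (1.0 - math.exp(-b * t))

        # Synthetic "observations" generated with true coefficients (3.0, 0.7).
        obs = [(t, model(t, 3.0, 0.7)) for t in (0.5, 1.0, 2.0, 4.0)]

        def misfit(params):
            a, b = params
            return sum((model(t, a, b) - y) ** 2 for t, y in obs)

        def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
            rng = random.Random(seed)
            pos = [[rng.uniform(0.0, 10.0), rng.uniform(0.0, 2.0)] for _ in range(n)]
            vel = [[0.0, 0.0] for _ in range(n)]
            pbest = [p[:] for p in pos]
            gbest = min(pbest, key=misfit)
            for _ in range(iters):
                for i in range(n):
                    for d in range(2):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * rng.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    # Clamp to the search box to keep the toy model well behaved.
                    pos[i][0] = min(10.0, max(0.0, pos[i][0]))
                    pos[i][1] = min(2.0, max(0.0, pos[i][1]))
                    if misfit(pos[i]) < misfit(pbest[i]):
                        pbest[i] = pos[i][:]
                gbest = min(pbest, key=misfit)
            return gbest

        print(pso())   # should approach (3.0, 0.7)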

  9. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and exploit the warranty's marketing potential, the manufacturer needs to master techniques for predicting warranty cost from the reliability characteristics of the product. In this paper, a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics from test data for light bulbs under various operating conditions; compared with a linear regression model used in the literature for similar tasks, the neural network proved to be the more accurate method. The reliability parameters obtained in this way are then used in a Monte Carlo simulation to predict the times to failure needed for the warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions, and in this way to lower costs and increase profit.
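
    A Monte Carlo sketch in Python of the cost-prediction step under the combined free-replacement / pro-rata policy. A Weibull time-to-failure distribution stands in for the neural-network reliability predictions, and all numbers are illustrative.

        import random

        def expected_warranty_cost(shape=1.8, scale=1.4, w1=0.5, w2=1.0,
                                   unit_cost=1.0, trials=100000, seed=0):
            # w1: free-replacement period; w1..w2: pro-rata period (years).
            rng = random.Random(seed)
            total = 0.0
            for _ in range(trials):
                t = rng.weibullvariate(scale, shape)   # time to failure (years)
                if t < w1:
                    total += unit_cost                 # free replacement
                elif t < w2:
                    total += unit_cost * (w2 - t) / (w2 - w1)   # pro-rata rebate
            return total / trials

        print(expected_warranty_cost())   # expected warranty cost per unit sold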

  10. Catalytic CVD synthesis of boron nitride and carbon nanomaterials - synergies between experiment and theory.

    PubMed

    McLean, Ben; Eveleens, Clothilde A; Mitchell, Izaac; Webber, Grant B; Page, Alister J

    2017-10-11

    Low-dimensional carbon and boron nitride nanomaterials - hexagonal boron nitride, graphene, boron nitride nanotubes and carbon nanotubes - remain at the forefront of advanced materials research. Catalytic chemical vapour deposition has become an invaluable technique for reliably and cost-effectively synthesising these materials. In this review, we will emphasise how a synergy between experimental and theoretical methods has enhanced the understanding and optimisation of this synthetic technique. This review examines recent advances in the application of CVD to synthesising boron nitride and carbon nanomaterials and highlights where, in many cases, molecular simulations and quantum chemistry have provided key insights complementary to experimental investigation. This synergy is particularly prominent in the field of carbon nanotube and graphene CVD synthesis, and we propose here it will be the key to future advances in optimisation of CVD synthesis of boron nitride nanomaterials, boron nitride - carbon composite materials, and other nanomaterials generally.

  11. Suppressing unsteady flow in arterio-venous fistulae

    NASA Astrophysics Data System (ADS)

    Grechy, L.; Iori, F.; Corbett, R. W.; Shurey, S.; Gedroyc, W.; Duncan, N.; Caro, C. G.; Vincent, P. E.

    2017-10-01

    Arterio-Venous Fistulae (AVF) are regarded as the "gold standard" method of vascular access for patients with end-stage renal disease who require haemodialysis. However, a large proportion of AVF do not mature, and hence fail, as a result of various pathologies such as Intimal Hyperplasia (IH). Unphysiological flow patterns, including high-frequency flow unsteadiness, associated with the unnatural and often complex geometries of AVF are believed to be implicated in the development of IH. In the present study, we employ a Mesh Adaptive Direct Search optimisation framework, computational fluid dynamics simulations, and a new cost function to design a novel non-planar AVF configuration that can suppress high-frequency unsteady flow. A prototype device for holding an AVF in the optimal configuration is then fabricated, and proof-of-concept is demonstrated in a porcine model. Results constitute the first use of numerical optimisation to design a device for suppressing potentially pathological high-frequency flow unsteadiness in AVF.

  12. Thermal Performance Analysis of Solar Collectors Installed for Combisystem in the Apartment Building

    NASA Astrophysics Data System (ADS)

    Žandeckis, A.; Timma, L.; Blumberga, D.; Rochas, C.; Rošā, M.

    2012-01-01

    The paper focuses on the application of a wood pellet and solar combisystem for space heating and hot water preparation in apartment buildings under the climate of Northern Europe. A pilot project has been implemented in the city of Sigulda (N 57° 09.410 E 024° 52.194), Latvia. The system was designed and optimised using TRNSYS, a dynamic simulation tool, and the pilot project was continuously monitored. The heat transfer fluid flow rate and the influence of the inlet temperature on the performance of the solar collectors were analysed, and the thermal performance of the solar collector loop was studied using a direct method. A multiple regression analysis was carried out using STATGRAPHICS Centurion 16.1.15 with the aim of identifying the operational and weather parameters with the strongest influence on collector performance. The parameters to be used for the system's optimisation have been evaluated.
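
    The "direct method" mentioned above amounts to dividing the measured useful heat gain by the incident solar power. A small Python sketch with illustrative numbers (the cp value for a glycol mixture is an assumption):

        def collector_efficiency(m_dot, t_in, t_out, irradiance, area, cp=3800.0):
            # m_dot [kg/s], temperatures [degC], irradiance [W/m2], area [m2],
            # cp [J/(kg K)], roughly 3800 for a typical glycol mixture (assumed).
            q_useful = m_dot * cp * (t_out - t_in)   # useful heat gain [W]
            return q_useful / (irradiance * area)

        print(collector_efficiency(m_dot=0.05, t_in=40.0, t_out=48.0,
                                   irradiance=800.0, area=4.0))   # about 0.48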

  13. Kinetics in the real world: linking molecules, processes, and systems.

    PubMed

    Kohse-Höinghaus, Katharina; Troe, Jürgen; Grabow, Jens-Uwe; Olzmann, Matthias; Friedrichs, Gernot; Hungenberg, Klaus-Dieter

    2018-04-25

    Unravelling elementary steps, reaction pathways, and kinetic mechanisms is key to understanding the behaviour of many real-world chemical systems that span from the troposphere or even interstellar media to engines and process reactors. Recent work in chemical kinetics provides detailed information on the reactive changes occurring in chemical systems, often on the atomic or molecular scale. The optimisation of practical processes, for instance in combustion, catalysis, battery technology, polymerisation, and nanoparticle production, can profit from a sound knowledge of the underlying fundamental chemical kinetics. Reaction mechanisms can combine information gained from theory and experiments to enable the predictive simulation and optimisation of the crucial process variables and influences on the system's behaviour that may be exploited for both monitoring and control. Chemical kinetics, as one of the pillars of Physical Chemistry, thus contributes importantly to understanding and describing natural environments and technical processes and is becoming increasingly relevant for interactions in and with the real world.

  14. Undermining and Strengthening Social Networks through Network Modification

    PubMed Central

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-01-01

    Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper helps to situate the existing work on network interventions and also opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as showing how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention. PMID:27703198

  15. Undermining and Strengthening Social Networks through Network Modification.

    PubMed

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-10-05

    Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper helps to situate the existing work on network interventions and also opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as showing how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention.

  16. Undermining and Strengthening Social Networks through Network Modification

    NASA Astrophysics Data System (ADS)

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-10-01

    Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper helps to situate the existing work on network interventions and also opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as showing how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention.
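
    A minimal Python sketch of the node-removal case using networkx: greedily delete the node whose removal most shrinks the largest connected component. The paper instead optimises an empirically calibrated objective with ERGM-based recovery; neither is reproduced here, and the karate-club graph is just a stand-in dataset.

        import networkx as nx

        def largest_component(G):
            return max((len(c) for c in nx.connected_components(G)), default=0)

        def greedy_removals(G, k):
            G = G.copy()
            removed = []
            for _ in range(k):
                def impact(n):
                    # Size of the largest component once node n is deleted.
                    H = G.copy()
                    H.remove_node(n)
                    return largest_component(H)
                best = min(G.nodes, key=impact)
                G.remove_node(best)
                removed.append(best)
            return removed, largest_component(G)

        G = nx.karate_club_graph()
        print(greedy_removals(G, 3))   # removed nodes and remaining component size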

  17. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
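    As an illustration of the cost-function comparison described above, the following is a small sketch of a histogram-based mutual-information measure between a measured and a reprojected projection; the images and bin count are invented, and the paper's exact estimator may differ.

    ```python
    # Histogram-based mutual information between two equally sized images.
    import numpy as np

    def mutual_information(a, b, bins=32):
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint distribution
        px = pxy.sum(axis=1, keepdims=True)       # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                              # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Toy usage: MI drops as misalignment between the images grows.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    print(mutual_information(img, img))                 # high: identical images
    print(mutual_information(img, np.roll(img, 5, 0)))  # lower after a 5-pixel shift
    ```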

  18. Intelligent Internet-based information system optimises diabetes mellitus management in communities.

    PubMed

    Wei, Xuejuan; Wu, Hao; Cui, Shuqi; Ge, Caiying; Wang, Li; Jia, Hongyan; Liang, Wannian

    2018-05-01

    To evaluate the effect of an intelligent Internet-based information system upon optimising the management of patients diagnosed with type 2 diabetes mellitus (T2DM). In 2015, a T2DM information system was introduced to optimise the management of T2DM patients for 1 year in Fangzhuang community of Beijing, China. A total of 602 T2DM patients who were registered in the health service centre of Fangzhuang community were enrolled based on an isometric sampling technique. The data from 587 patients were used in the final analysis. The intervention effect was subsequently assessed by statistically comparing multiple parameters, such as the prevalence of glycaemic control, standard health management and annual outpatient consultation visits per person, before and after the implementation of the T2DM information system. In 2015, a total of 1668 T2DM patients were newly registered in Fangzhuang community. The glycaemic control rate was 37.65% in 2014 and was significantly elevated to 62.35% in 2015 (p < 0.001). After application of the Internet-based information system, the rate of standard health management increased from 48.04% to 85.01% (p < 0.001). Among all registered T2DM patients, the annual outpatient consultation visits per person in Fangzhuang community decreased considerably from 24.88% in 2014 to 22.84% in 2015 (p < 0.001), and declined from 14.59% to 13.66% in general hospitals (p < 0.05). Application of the T2DM information system optimised the management of T2DM patients in Fangzhuang community and decreased the outpatient numbers in both community and general hospitals, which played a positive role in assisting T2DM patients and their healthcare providers to better manage this chronic illness.

  19. Model-Free Machine Learning in Biomedicine: Feasibility Study in Type 1 Diabetes

    PubMed Central

    Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G.

    2016-01-01

    Although reinforcement learning (RL) is suitable for highly uncertain systems, the applicability of this class of algorithms to medical treatment may be limited by the patient variability which dictates individualised tuning for their usually multiple algorithmic parameters. This study explores the feasibility of RL in the framework of artificial pancreas development for type 1 diabetes (T1D). In this approach, an Actor-Critic (AC) learning algorithm is designed and developed for the optimisation of insulin infusion for personalised glucose regulation. AC optimises the daily basal insulin rate and insulin:carbohydrate ratio for each patient, on the basis of his/her measured glucose profile. Automatic, personalised tuning of AC is based on the estimation of information transfer (IT) from insulin to glucose signals. Insulin-to-glucose IT is linked to patient-specific characteristics related to total daily insulin needs and insulin sensitivity (SI). The AC algorithm is evaluated using an FDA-accepted T1D simulator on a large patient database under a complex meal protocol, meal uncertainty and diurnal SI variation. The results showed that 95.66% of the time was spent in normoglycaemia in the presence of meal uncertainty and 93.02% when meal uncertainty and SI variation were simultaneously considered. The time spent in hypoglycaemia was 0.27% in both cases. The novel tuning method reduced the risk of severe hypoglycaemia, especially in patients with low SI. PMID:27441367
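    A minimal Actor-Critic sketch on a toy scalar regulation problem is given below to illustrate the update structure (Gaussian policy, one-step TD critic). The dynamics, features and learning rates are all invented for illustration; they stand in for the paper's basal-rate/carbohydrate-ratio actions and FDA-accepted simulator.

    ```python
    # Minimal Actor-Critic sketch. State x: glucose deviation from target;
    # action a: insulin adjustment. All numbers are invented -- this is not
    # the paper's validated T1D simulator or IT-based tuning method.
    import numpy as np

    rng = np.random.default_rng(1)
    theta, w = 0.0, 0.0            # actor gain (policy mean = theta*x), critic weight
    alpha_a, alpha_c = 5e-4, 2e-3  # actor / critic learning rates
    gamma, sigma = 0.95, 0.5       # discount factor, exploration noise

    x = 2.0
    for step in range(20000):
        mu = theta * x
        a = mu + sigma * rng.standard_normal()              # Gaussian policy
        x_next = 0.9 * x - 0.5 * a + 0.05 * rng.standard_normal()
        x_next = float(np.clip(x_next, -5.0, 5.0))          # keep the toy stable
        r = -x_next ** 2                                    # penalise deviation
        # Critic: quadratic value V(x) = w * x^2, one-step TD update
        td = r + gamma * w * x_next ** 2 - w * x ** 2
        w += alpha_c * td * x ** 2
        # Actor: policy gradient, grad log pi = (a - mu) * x / sigma^2
        theta += alpha_a * td * (a - mu) * x / sigma ** 2
        x = x_next
        if abs(x) < 0.05:                                   # inject a "meal" disturbance
            x = float(np.clip(2.0 * rng.standard_normal(), -5.0, 5.0))

    print(f"learned policy gain: {theta:.2f} (a gain of 1.8 would cancel the drift)")
    ```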

  20. Refinement procedure for the image alignment in high-resolution electron tomography.

    PubMed

    Houben, L; Bar Sadan, M

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive to the reference of marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.
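    A toy analogue of the contrast-driven alignment idea: search over shifts of one image to maximise the variance (a simple contrast measure) of the average of two images, since misalignment blurs the average and lowers its contrast. The exhaustive integer-shift search and the variance metric are simplifications of the sub-volume optimisation described above.

    ```python
    # Contrast-driven shift search: misaligned averages blur and lose variance.
    import numpy as np

    def best_shift(ref, img, max_shift=5):
        best, best_contrast = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                contrast = np.var((ref + shifted) / 2.0)   # variance as contrast
                if contrast > best_contrast:
                    best, best_contrast = (dy, dx), contrast
        return best

    rng = np.random.default_rng(2)
    ref = np.zeros((64, 64)); ref[24:40, 24:40] = 1.0
    ref += 0.05 * rng.standard_normal(ref.shape)
    img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)     # misaligned copy
    print(best_shift(ref, img))                            # expect about (-3, 2)
    ```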

  1. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    PubMed

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit installed downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater in a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO4^2-). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
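    A sketch of the parameter-estimation step, assuming the standard solution-diffusion plus film-theory form for observed rejection, R_obs = Jv / (Jv + B·exp(Jv/K)); the paper's SDFM and its universal global optimisation method are richer, and the data below are synthetic.

    ```python
    # Fit an ion permeability coefficient B and mass transfer coefficient K
    # from observed rejection vs. permeate flux (units: m/s). The functional
    # form is the generic solution-diffusion + film-theory relation, used
    # here as a stand-in for the paper's SDFM.
    import numpy as np
    from scipy.optimize import curve_fit

    def observed_rejection(Jv, B, K):
        return Jv / (Jv + B * np.exp(Jv / K))

    rng = np.random.default_rng(3)
    Jv = np.linspace(2e-6, 2e-5, 10)                      # synthetic flux values
    R = observed_rejection(Jv, B=1.5e-7, K=8e-6) + 0.002 * rng.standard_normal(10)

    (B_fit, K_fit), _ = curve_fit(observed_rejection, Jv, R, p0=[1e-7, 1e-5])
    print(f"B = {B_fit:.2e} m/s, K = {K_fit:.2e} m/s")
    ```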

  2. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
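    A minimal sketch of the normal-distribution setting: the expected decision cost is minimised over the threshold with scipy, using invented distribution parameters, prevalence and misclassification costs.

    ```python
    # Decision-cost-based threshold: minimise expected cost over t, assuming
    # normal marker distributions. All parameter values are illustrative.
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    mu0, sd0 = 0.0, 1.0      # marker in non-diseased subjects
    mu1, sd1 = 1.5, 1.2      # marker in diseased subjects
    prev = 0.2               # disease prevalence
    c_fp, c_fn = 1.0, 5.0    # costs of a false positive / false negative

    def expected_cost(t):
        fpr = 1.0 - norm.cdf(t, mu0, sd0)   # non-diseased above threshold
        fnr = norm.cdf(t, mu1, sd1)         # diseased below threshold
        return (1 - prev) * c_fp * fpr + prev * c_fn * fnr

    res = minimize_scalar(expected_cost, bounds=(-3, 5), method="bounded")
    print(f"optimal threshold = {res.x:.3f}, expected cost = {res.fun:.4f}")
    ```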

  3. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    PubMed

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process, and standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and, finally, the i-vector based framework. However, the process of learning based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%.

  4. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    PubMed Central

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process, and standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and, finally, the i-vector based framework. However, the process of learning based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%. PMID:29672546
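    Both records above rest on the basic ELM construction: a random hidden layer followed by a closed-form least-squares solve for the output weights. The sketch below shows that core (SA-ELM and ESA-ELM then optimise the randomly selected input weights); the toy dataset is invented.

    ```python
    # Minimal Extreme Learning Machine: random hidden layer, output weights
    # solved in closed form via the Moore-Penrose pseudoinverse.
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, Y):
            n_features = X.shape[1]
            self.W = self.rng.standard_normal((n_features, self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)       # hidden activations
            self.beta = np.linalg.pinv(H) @ Y      # least-squares output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Toy usage: two-class problem with one-hot targets
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 4))
    labels = (X[:, 0] + X[:, 1] > 0).astype(int)
    Y = np.eye(2)[labels]
    model = ELM(n_hidden=40).fit(X, Y)
    accuracy = (model.predict(X).argmax(axis=1) == labels).mean()
    print(f"training accuracy: {accuracy:.2f}")
    ```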

  5. A soft computing-based approach to optimise queuing-inventory control problem

    NASA Astrophysics Data System (ADS)

    Alaghebandha, Mohammad; Hajipour, Vahid

    2015-04-01

    In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the summation of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is NP-hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve the model. To justify the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the algorithm parameter values that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.
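    Of the metaheuristics named above, simulated annealing is the simplest to sketch; the following searches integer maximum-inventory levels against a stand-in holding/shortage cost, not the paper's queuing-based objective.

    ```python
    # Simulated annealing over integer maximum-inventory levels. The cost
    # function is a placeholder quadratic holding/shortage trade-off.
    import numpy as np

    rng = np.random.default_rng(4)
    demand = np.array([40, 25, 60])            # mean demand per product (invented)

    def cost(R):                               # placeholder objective
        holding = 0.5 * np.sum(np.maximum(R - demand, 0) ** 2)
        shortage = 4.0 * np.sum(np.maximum(demand - R, 0) ** 2)
        return holding + shortage

    R = np.zeros_like(demand)                  # current max-inventory levels
    best, best_cost = R.copy(), cost(R)
    T = 100.0
    for it in range(5000):
        cand = np.clip(R + rng.integers(-3, 4, size=R.size), 0, 200)
        delta = cost(cand) - cost(R)
        if delta < 0 or rng.random() < np.exp(-delta / T):   # Metropolis rule
            R = cand
            if cost(R) < best_cost:
                best, best_cost = R.copy(), cost(R)
        T *= 0.999                             # geometric cooling
    print("best levels:", best, "cost:", best_cost)
    ```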

  6. Discontinuous permeable adsorptive barrier design and cost analysis: a methodological approach to optimisation.

    PubMed

    Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino

    2017-09-19

    This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best-performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added; this showed a 40% reduction in total remediation costs when using the optimised PAB-D.

  7. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Ming

    2014-12-01

    Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling the investment targets. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market over 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  8. Characterisation and optimisation of flexible transfer lines for liquid helium. Part I: Experimental results

    NASA Astrophysics Data System (ADS)

    Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.

    2016-04-01

    The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer appreciable amounts of LHe evaporate due to heat leak and pressure drop. The helium gas generated in this way needs to be collected and reliquefied, which requires a large amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish a LHe transfer with minor evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors led to unique experimental data on this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.

  9. A study of lateral fall-off (penumbra) optimisation for pencil beam scanning (PBS) proton therapy

    NASA Astrophysics Data System (ADS)

    Winterhalter, C.; Lomax, A.; Oxley, D.; Weber, D. C.; Safai, S.

    2018-01-01

    The lateral fall-off is crucial for sparing organs at risk in proton therapy. It is therefore of high importance to minimize the penumbra for pencil beam scanning (PBS). Three optimisation approaches are investigated: edge-collimated uniformly weighted spots (collimation), pencil beam optimisation of uncollimated pencil beams (edge-enhancement) and the optimisation of edge collimated pencil beams (collimated edge-enhancement). To deliver energies below 70 MeV, these strategies are evaluated in combination with the following pre-absorber methods: field specific fixed thickness pre-absorption (fixed), range specific, fixed thickness pre-absorption (automatic) and range specific, variable thickness pre-absorption (variable). All techniques are evaluated by Monte Carlo simulated square fields in a water tank. For a typical air gap of 10 cm, without pre-absorber collimation reduces the penumbra only for water equivalent ranges between 4-11 cm by up to 2.2 mm. The sharpest lateral fall-off is achieved through collimated edge-enhancement, which lowers the penumbra down to 2.8 mm. When using a pre-absorber, the sharpest fall-offs are obtained when combining collimated edge-enhancement with a variable pre-absorber. For edge-enhancement and large air gaps, it is crucial to minimize the amount of material in the beam. For small air gaps however, the superior phase space of higher energetic beams can be employed when more material is used. In conclusion, collimated edge-enhancement combined with the variable pre-absorber is the recommended setting to minimize the lateral penumbra for PBS. Without collimator, it would be favourable to use a variable pre-absorber for large air gaps and an automatic pre-absorber for small air gaps.

  10. Evaluation and optimisation of current milrinone prescribing for the treatment and prevention of low cardiac output syndrome in paediatric patients after open heart surgery using a physiology-based pharmacokinetic drug-disease model.

    PubMed

    Vogt, Winnie

    2014-01-01

    Milrinone is the drug of choice for the treatment and prevention of low cardiac output syndrome (LCOS) in paediatric patients after open heart surgery across Europe. Discrepancies, however, among prescribing guidance, clinical studies and practice pattern require clarification to ensure safe and effective prescribing. However, the clearance prediction equations derived from classical pharmacokinetic modelling provide limited support as they have recently failed a clinical practice evaluation. Therefore, the objective of this study was to evaluate current milrinone dosing using physiology-based pharmacokinetic (PBPK) modelling and simulation to complement the existing pharmacokinetic knowledge and propose optimised dosing regimens as a basis for improving the standard of care for paediatric patients. A PBPK drug-disease model using a population approach was developed in three steps from healthy young adults to adult patients and paediatric patients with and without LCOS after open heart surgery. Pre- and postoperative organ function values from adult and paediatric patients were collected from literature and integrated into a disease model as factorial changes from the reference values in healthy adults aged 20-40 years. The disease model was combined with the PBPK drug model and evaluated against existing pharmacokinetic data. Model robustness was assessed by parametric sensitivity analysis. In the next step, virtual patient populations were created, each with 1,000 subjects reflecting the average adult and paediatric patient characteristics with regard to age, sex, bodyweight and height. They were integrated into the PBPK drug-disease model to evaluate the effectiveness of current milrinone dosing in achieving the therapeutic target range of 100-300 ng/mL milrinone in plasma. Optimised dosing regimens were subsequently developed. The pharmacokinetics of milrinone in healthy young adults as well as adult and paediatric patients were accurately described with an average fold error of 1.1 ± 0.1 (mean ± standard deviation) and mean relative deviation of 1.5 ± 0.3 as measures of bias and precision, respectively. Normalised maximum sensitivity coefficients for model input parameters ranged from -0.84 to 0.71, which indicated model robustness. The evaluation of milrinone dosing across different paediatric age groups showed a non-linear age dependence of total plasma clearance and exposure differences of a factor 1.4 between patients with and without LCOS for a fixed dosing regimen. None of the currently used dosing regimens for milrinone achieved the therapeutic target range across all paediatric age groups and adult patients, so optimised dosing regimens were developed that considered the age-dependent and pathophysiological differences. The PBPK drug-disease model for milrinone in paediatric patients with and without LCOS after open heart surgery highlights that age, disease and surgery differently impact the pharmacokinetics of milrinone, and that current milrinone dosing for LCOS is suboptimal to maintain the therapeutic target range across the entire paediatric age range. Thus, optimised dosing strategies are proposed to ensure safe and effective prescribing.

  11. Contact stiffness considerations when simulating tyre/road noise

    NASA Astrophysics Data System (ADS)

    Winroth, Julia; Kropp, Wolfgang; Hoever, Carsten; Höstmad, Patrik

    2017-11-01

    Tyre/road simulation tools that can capture tyre vibrations, rolling resistance and noise generation are useful for understanding the complex processes that are involved and thereby promoting further development and optimisation. The most detailed tyre/road contact models use a spatial discretisation of the contact and assume an interfacial stiffness to account for the small-scale roughness within the elements. This interfacial stiffness has been found to have a significant impact on the simulated noise emissions but no thorough investigations of this sensitivity have been conducted. Three mechanisms are thought to be involved: The horn effect, the modal composition of the vibrational field of the tyre and the contact forces exciting the tyre vibrations. This study used a numerical tyre/road noise simulation tool based on physical relations to investigate these aspects. The model includes a detailed time-domain contact model with linear or non-linear contact springs that accounts for the effect of local tread deformation on smaller length scales. Results confirm that an increase in contact spring stiffness causes a significant increase of the simulated tyre/road noise. This is primarily caused by a corresponding increase in the contact forces, resulting in larger vibrational amplitudes. The horn effect and the modal composition are relatively unaffected and have minor effects on the radiated noise. A more detailed non-linear contact spring formulation with lower stiffness at small indentations results in a reduced high-frequency content in the contact forces and the simulated noise.

  12. An analytical fuzzy-based approach to L2-gain optimal control of input-affine nonlinear systems using Newton-type algorithm

    NASA Astrophysics Data System (ADS)

    Milic, Vladimir; Kasac, Josip; Novakovic, Branko

    2015-10-01

    This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, a multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.

  13. Experimental design-based isotope-dilution SPME-GC/MS method development for the analysis of smoke flavouring products.

    PubMed

    Giri, Anupam; Zelinkova, Zuzana; Wenzl, Thomas

    2017-12-01

    For the implementation of Regulation (EC) No 2065/2003, related to smoke flavourings used or intended for use in or on foods, a method based on solid-phase micro extraction (SPME) GC/MS was developed for the characterisation of liquid smoke products. A statistically based experimental design (DoE) was used for method optimisation. The best general conditions to quantitatively analyse the liquid smoke compounds were obtained with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fibre, 60°C extraction temperature, 30 min extraction time, 250°C desorption temperature, 180 s desorption time, 15 s agitation time, and 250 rpm agitation speed. Under the optimised conditions, 119 wood pyrolysis products including furan/pyran derivatives, phenols, guaiacol, syringol, benzenediol, and their derivatives, cyclic ketones, and several other heterocyclic compounds were identified. The proposed method was repeatable (RSD < 5%) and the calibration functions were linear for all compounds under study. Nine isotopically labelled internal standards were used to improve the quantification of analytes by compensating for matrix effects that might affect headspace equilibrium and the extractability of compounds. The optimised isotope-dilution SPME-GC/MS analytical method proved to be fit for purpose, allowing the rapid identification and quantification of volatile compounds in liquid smoke flavourings.

  14. Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.

    PubMed

    Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K

    2012-06-01

    Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase, using various parameters, such as concentration of enzyme, incubation time and temperature, was optimised. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h, in comparison to cellulase (359±0.30 μg) and pectinase (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in the yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM). A central composite design (CCD) was used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on the RSM analysis, a temperature of 51-54°C, a time of 36-45 min and a cocktail of pectinase, cellulase and hemicellulase, set at 2% each, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a three-fold yield enhancement of stevioside. The isolated stevioside was characterised through 1H-NMR spectroscopy, by comparison with a stevioside standard. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    PubMed

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
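    A minimal version of the mesh-construction step using scikit-image's marching cubes on a synthetic volume (a sphere standing in for a segmented confocal stack):

    ```python
    # Marching cubes on a synthetic volume; the resulting surface mesh is the
    # kind of geometry that can bound random-walk diffusion simulations.
    import numpy as np
    from skimage import measure

    # Synthetic volume: a sphere of radius 20 voxels in a 64^3 grid
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(x**2 + y**2 + z**2) <= 20).astype(float)

    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(f"mesh: {len(verts)} vertices, {len(faces)} triangular faces")
    ```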

  16. Estimation of transversely isotropic material properties from magnetic resonance elastography using the optimised virtual fields method.

    PubMed

    Miller, Renee; Kolipaka, Arunark; Nash, Martyn P; Young, Alistair A

    2018-03-12

    Magnetic resonance elastography (MRE) has been used to estimate isotropic myocardial stiffness. However, anisotropic stiffness estimates may give insight into structural changes that occur in the myocardium as a result of pathologies such as diastolic heart failure. The virtual fields method (VFM) has been proposed for estimating material stiffness from image data. This study applied the optimised VFM to identify transversely isotropic material properties from both simulated harmonic displacements in a left ventricular (LV) model with a fibre field measured from histology and isotropic phantom MRE data. Two material model formulations were implemented, estimating either 3 or 5 material properties. The 3-parameter formulation writes the transversely isotropic constitutive relation in a way that dissociates the bulk modulus from the other parameters. Accurate identification of transversely isotropic material properties in the LV model was shown to depend on the loading condition applied, the amount of Gaussian noise in the signal, and the frequency of excitation. Parameter sensitivity values showed that the shear moduli are less sensitive to noise than the other parameters. This preliminary investigation showed the feasibility and limitations of using the VFM to identify transversely isotropic material properties from MRE images of a phantom as well as from simulated harmonic displacements in an LV geometry. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Modifying landing mat material properties may decrease peak contact forces but increase forefoot forces in gymnastics landings.

    PubMed

    Mills, Chris; Yeadon, Maurice R; Pain, Matthew T G

    2010-09-01

    This study investigated how changes in the material properties of a landing mat could minimise ground reaction forces (GRF) and internal loading on a gymnast during landing. A multi-layer model of a gymnastics competition landing mat and a subject-specific seven-link wobbling mass model of a gymnast were developed to address this aim. Landing mat properties (stiffness and damping) were optimised using a Simplex algorithm to minimise GRF and internal loading. The optimisation of the landing mat parameters was characterised by minimal changes to the mat's stiffness (<0.5%) but increased damping (272%) compared to the competition landing mat. Changes to the landing mat resulted in reduced peak vertical and horizontal GRF and reduced bone bending moments in the shank and thigh compared to a matching simulation. Peak bone bending moments within the thigh and shank were reduced by 6%, from 321.5 Nm to 302.5 Nm, and GRF by 12%, from 8626 N to 7552 N, when compared to a matching simulation. The reduction in these forces may help to reduce the risk of bone fracture injury associated with a single landing and reduce the risk of a chronic injury such as a stress fracture.
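    To illustrate the optimisation loop, the sketch below runs a Nelder-Mead (simplex) search over the stiffness and damping of a toy one-mass landing model, minimising peak contact force with a penalty on excessive mat compression. The gymnast and mat models in the study are far more detailed; every number here is illustrative.

    ```python
    # Simplex search over mat stiffness k and damping c for a toy landing model.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    m, v0 = 70.0, -6.0                       # mass (kg), touchdown velocity (m/s)

    def objective(params):
        k, c = params
        if k <= 0.0 or c <= 0.0:
            return 1e9                       # keep the simplex in a valid region
        def rhs(t, s):                       # s = [displacement x, velocity v]
            x, v = s
            return [v, -(k * x + c * v) / m - 9.81]
        sol = solve_ivp(rhs, (0.0, 0.3), [0.0, v0], max_step=1e-3)
        force = np.abs(k * sol.y[0] + c * sol.y[1])   # mat reaction on the mass
        penetration = np.max(-sol.y[0])               # deepest compression (m)
        penalty = 1e6 * max(penetration - 0.10, 0.0)  # mat must not bottom out
        return float(np.max(force) + penalty)

    res = minimize(objective, x0=[5e4, 1e3], method="Nelder-Mead")
    k_opt, c_opt = res.x
    print(f"k = {k_opt:.0f} N/m, c = {c_opt:.0f} N s/m, peak force ~ {res.fun:.0f} N")
    ```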

  18. Use of the Inverse Approach for the Manufacture and Decoration of Food Cans

    NASA Astrophysics Data System (ADS)

    Duffett, G. A.; Forgas, A.; Neamtu, L.; Naceur, H.; Batoz, J. L.; Guo, Y. Q.

    2005-08-01

    Innovation is a key objective in the metal packaging industry in order to produce new concepts, designs, shapes and printing. Simulation technology now allows both the can design and the manufacturing process to be carefully analysed before any physical prototypes or dies have been manufactured. These simulations are traditionally carried out using incremental simulation methodologies. However, much information may also be obtained by using the inverse approach: the initial blank format for the can body as well as its lid may be optimised much faster, and the actual decoration of the can may be evaluated and even calculated when deformation printing techniques are utilised. This paper presents some of the technical details relating to the inverse approach employed in Stampack to carry out simulations important for the manufacture of food cans, which are shown via industrial examples.

  19. Fast Simulations of Gas Sloshing and Cold Front Formation

    NASA Technical Reports Server (NTRS)

    Roediger, E.; ZuHone, J. A.

    2011-01-01

    We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artefacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artefacts.

  20. Fast Simulations of Gas Sloshing and Cold Front Formation

    NASA Technical Reports Server (NTRS)

    Roediger, E.; ZuHone, J. A.

    2012-01-01

    We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artifacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artifacts.

  1. Modelling and strategy optimisation for a kind of networked evolutionary games with memories under the bankruptcy mechanism

    NASA Astrophysics Data System (ADS)

    Fu, Shihua; Li, Haitao; Zhao, Guodong

    2018-05-01

    This paper investigates the evolutionary dynamic and strategy optimisation for a kind of networked evolutionary games whose strategy updating rules incorporate 'bankruptcy' mechanism, and the situation that each player's bankruptcy is due to the previous continuous low profits gaining from the game is considered. First, by using semi-tensor product of matrices method, the evolutionary dynamic of this kind of games is expressed as a higher order logical dynamic system and then converted into its algebraic form, based on which, the evolutionary dynamic of the given games can be discussed. Second, the strategy optimisation problem is investigated, and some free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example is given to show that our new results are very effective.

  2. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity and acceleration fields are discontinuous at the fault), with elastic behaviour. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using a Verlet scheme (see the sketch below). Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems described in that work to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single- and multi-fault simulation examples are presented.
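    As referenced above, a sketch of an explicit velocity Verlet update for a 1D elastic chain; this is a minimal analogue of the explicit elastodynamic solves, not the Finley/escript code itself.

    ```python
    # Explicit velocity Verlet integration of a 1D mass-spring chain,
    # a minimal stand-in for explicit elastodynamic time stepping.
    import numpy as np

    n, k, m, dt = 100, 1.0, 1.0, 0.05        # nodes, stiffness, mass, time step
    u = np.exp(-0.1 * (np.arange(n) - n / 2) ** 2)   # initial displacement pulse
    v = np.zeros(n)

    def accel(u):
        a = np.zeros_like(u)
        a[1:-1] = k * (u[2:] - 2 * u[1:-1] + u[:-2]) / m   # discrete Laplacian
        return a                                            # fixed ends (a = 0)

    a = accel(u)
    for step in range(400):                  # velocity Verlet loop
        u += dt * v + 0.5 * dt**2 * a
        a_new = accel(u)
        v += 0.5 * dt * (a + a_new)
        a = a_new

    print(f"pulse peak after propagation: {u.max():.3f}")
    ```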

  3. Optimisation of logistics processes of energy grass collection

    NASA Astrophysics Data System (ADS)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid irresponsible decisions made purely on the basis of experience and intuition, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically sound way forward. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, but the possibility of multiple collection points and multi-level collection has not been taken into consideration. The possible areas of use of energy grass are very wide (energy production, biogas and bio-alcohol production, the paper and textile industries, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: the total amount of energy grass to be harvested in each region; the specific facility costs of collection, warehousing and production units; the specific costs of transportation resources; the pre-scheduling of the harvesting process; the specific transportation and warehousing costs; and the pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities; (2) capacity constraints are not ignored; (3) the cost function of transportation is non-linear; (4) the drivers' conditions are ignored. The objective function of the optimisation is the maximisation of the profit, i.e. the maximisation of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource, and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: the total cost of the collection process; the utilisation of transportation resources and warehouses; and the efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised by an ant colony algorithm.
The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm; a minimal code sketch of this two-level scheme follows the references below. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements: This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support from the European Union and co-funding from the European Social Fund. References: [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
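    A minimal sketch of the two-level scheme referenced above: a genetic algorithm assigns transportation demands to resources, and a routing subroutine prices each assignment. A nearest-neighbour heuristic stands in for the ant colony routing, and all locations and sizes are invented.

    ```python
    # GA assignment with a routing subroutine pricing each candidate.
    import numpy as np

    rng = np.random.default_rng(5)
    n_demands, n_resources = 12, 3
    points = rng.random((n_demands, 2)) * 100      # harvesting locations (km)
    depot = np.array([50.0, 50.0])

    def route_cost(idx):
        """Nearest-neighbour tour cost from the depot (ACO stand-in)."""
        if len(idx) == 0:
            return 0.0
        todo, pos, cost = list(idx), depot, 0.0
        while todo:
            d = [np.linalg.norm(points[j] - pos) for j in todo]
            j = todo.pop(int(np.argmin(d)))
            cost += min(d); pos = points[j]
        return cost

    def fitness(assign):                           # total cost over all resources
        return sum(route_cost(np.where(assign == r)[0]) for r in range(n_resources))

    pop = rng.integers(0, n_resources, (40, n_demands))   # initial population
    for gen in range(200):
        costs = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(costs)[:10]]        # selection of the 10 best
        children = elite[rng.integers(0, 10, (30,))].copy()
        mut = rng.random(children.shape) < 0.1     # mutation: reassign demands
        children[mut] = rng.integers(0, n_resources, mut.sum())
        pop = np.vstack([elite, children])
    print(f"best total route cost: {fitness(pop[0]):.1f}")
    ```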

  4. Non-fragile observer-based output feedback control for polytopic uncertain system under distributed model predictive control approach

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun

    2017-07-01

    In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under the distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so online design time can be saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller has been taken into consideration in order to increase the robustness of the polytopic uncertain system. A sufficient stability criterion is then presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, some simulation examples are employed to show the effectiveness of the method.

  5. Preoptimised VB: a fast method for the ground and excited states of ionic clusters I. Localised preoptimisation for (ArCO)+, (ArN2)+ and N4+

    NASA Astrophysics Data System (ADS)

    Langenberg, J. H.; Bucur, I. B.; Archirel, P.

    1997-09-01

    We show that in the simple case of van der Waals ionic clusters, the optimisation of orbitals within VB can be easily simulated with the help of pseudopotentials. The procedure yields the ground and the first excited states of the cluster simultaneously. This makes the calculation of potential energy surfaces for tri- and tetra-atomic clusters possible, with very acceptable computation times. We give potential curves for (ArCO)+, (ArN2)+ and N4+. An application to the simulation of the SCF method is shown for Na+H2O.

  6. WinTRAX: A raytracing software package for the design of multipole focusing systems

    NASA Astrophysics Data System (ADS)

    Grime, G. W.

    2013-07-01

    The software package TRAX was a simulation tool for modelling the path of charged particles through linear cylindrical multipole fields described by analytical expressions and was a development of the earlier OXRAY program (Grime and Watt, 1983; Grime et al., 1982) [1,2]. In a 2005 comparison of raytracing software packages (Incerti et al., 2005) [3], TRAX/OXRAY was compared with Geant4 and Zgoubi and was found to give close agreement with the more modern codes. TRAX was a text-based program which was only available for operation in a now rare VMS workstation environment, so a new program, WinTRAX, has been developed for the Windows operating system. This implements the same basic computing strategy as TRAX, and key sections of the code are direct translations from FORTRAN to C++, but the Windows environment is exploited to make an intuitive graphical user interface which simplifies and enhances many operations including system definition and storage, optimisation, beam simulation (including with misaligned elements) and aberration coefficient determination. This paper describes the program and presents comparisons with other software and real installations.

  7. Evaluation and optimization of footwear comfort parameters using finite element analysis and a discrete optimization algorithm

    NASA Astrophysics Data System (ADS)

    Papagiannis, P.; Azariadis, P.; Papanikos, P.

    2017-10-01

    Footwear is subject to bending and torsion deformations that affect comfort perception. Following review of Finite Element Analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear materials properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated along with comfort level determination through strain energy density stratification. Sensitivity of strain energy against material thickness is greater for bending than for torsion, with results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization w.r.t. bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.

  8. Delving into sensible measures to enhance the environmental performance of biohydrogen: A quantitative approach based on process simulation, life cycle assessment and data envelopment analysis.

    PubMed

    Martín-Gamboa, Mario; Iribarren, Diego; Susmozas, Ana; Dufour, Javier

    2016-08-01

    A novel approach is developed to evaluate quantitatively the influence of operational inefficiency in biomass production on the life-cycle performance of hydrogen from biomass gasification. Vine-growers and process simulation are used as key sources of inventory data. The life cycle assessment of biohydrogen according to current agricultural practices for biomass production is performed, as well as that of target biohydrogen according to agricultural practices optimised through data envelopment analysis. Only 20% of the vineyards assessed operate efficiently, and the benchmarked reduction percentages of operational inputs range from 45% to 73% in the average vineyard. The fulfilment of operational benchmarks avoiding irregular agricultural practices is concluded to improve significantly the environmental profile of biohydrogen (e.g., impact reductions above 40% for eco-toxicity and global warming). Finally, it is shown that this type of bioenergy system can be an excellent replacement for conventional hydrogen in terms of global warming and non-renewable energy demand. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management

    PubMed Central

    Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo

    2017-01-01

    Background and Aims: The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012–2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. Methods: During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. Results: The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Conclusions: Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. PMID:28981622

  10. Optimisation of simulated team training through the application of learning theories: a debate for a conceptual framework.

    PubMed

    Stocker, Martin; Burmester, Margarita; Allen, Meredith

    2014-04-03

    As a conceptual review, this paper debates relevant learning theories to inform the development, design and delivery of an effective educational programme for simulated team training relevant to health professionals. Kolb's experiential learning theory is used as the main conceptual framework to define the sequence of activities. Dewey's theory of reflective thought and action, Jarvis' modification of Kolb's learning cycle and Schön's reflection-on-action serve as a model to design scenarios for optimal concrete experience and debriefing for challenging participants' beliefs and habits. Bandura's theory of self-efficacy and newer socio-cultural learning models show that, for efficient team training, it is mandatory to introduce the social-cultural context of a team. The ideal simulated team training programme needs a scenario for concrete experience, followed by a debriefing with a critical reflexive observation and abstract conceptualisation phase, and ending with a second scenario for active experimentation. Let them re-experiment: repeating the scenario optimises the effect of a simulated training session. Challenge them to the edge: the scenario needs to challenge participants to generate failures and feelings of inadequacy to drive and motivate team members to reflect critically and learn. Not experience itself but the inadequacy and contradictions of habitual experience serve as the basis for reflection. Facilitate critical reflection: facilitators and group members must guide and motivate individual participants through the debriefing session, inciting and empowering learners to challenge their own beliefs and habits. To do this, learners need to feel psychologically safe. Let the group talk and critically explore. Motivate with reality and context: training with multidisciplinary team members, with different levels of expertise, acting in their usual environment (in-situ simulation) on physiological variables is mandatory to introduce cultural context and social conditions into the learning experience. Embedding in-situ team training sessions into a teaching programme, to enable repeated training and to assess team performance regularly, is mandatory for a cultural change of sustained improvement in team performance and patient safety.

  11. Game Theory of Mind

    PubMed Central

    Yoshida, Wako; Dolan, Ray J.; Friston, Karl J.

    2008-01-01

    This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a ‘stag-hunt’. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchical game theory and coevolution. PMID:19112488
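    As an aside, the bounded recursion can be made concrete with plain level-k reasoning in a stag hunt: a level-0 player is random, and each deeper level best-responds to a model of the opponent one level down. The payoffs and hard best response below are illustrative assumptions, not the paper's variational treatment.

    ```python
    import numpy as np

    # Row player's payoffs in a stag hunt (illustrative numbers, not the paper's).
    # Actions: 0 = stag, 1 = hare.
    PAYOFF = np.array([[4.0, 0.0],
                       [3.0, 3.0]])

    def policy(level: int) -> np.ndarray:
        """Level-k policy: level 0 is uniform; level k best-responds to a
        level-(k-1) model of the opponent -- the bounded recursion of
        'I think that you think that I think ...'."""
        if level == 0:
            return np.array([0.5, 0.5])
        opponent = policy(level - 1)       # my model of your policy
        expected = PAYOFF @ opponent       # expected payoff of each of my actions
        best = np.zeros(2)
        best[np.argmax(expected)] = 1.0    # hard best response (softmax also works)
        return best

    for k in range(4):
        print(f"level {k}: P(stag), P(hare) = {policy(k)}")
    ```

    Inferring an opponent's sophistication, as in the paper, then amounts to comparing the likelihood of their observed choices under policies of different levels.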

  12. Moss and peat hydraulic properties are optimized to maximise peatland water use efficiency

    NASA Astrophysics Data System (ADS)

    Kettridge, Nicholas; Tilak, Amey; Devito, Kevin; Petrone, Rich; Mendoza, Carl; Waddington, Mike

    2016-04-01

    Peatland ecosystems are globally important carbon and terrestrial surface water stores that have formed over millennia. These ecosystems have likely optimised their ecohydrological function over the long-term development of their soil hydraulic properties. Through a theoretical ecosystem approach, applying hydrological modelling integrated with known ecological thresholds and concepts, the optimisation of peat hydraulic properties is examined to determine which of the following conditions peatland ecosystems target during this development: i) maximise carbon accumulation, ii) maximise water storage, or iii) balance carbon profit across hydrological disturbances. Saturated hydraulic conductivity (Ks) and the empirical van Genuchten water retention parameter α are shown to provide a first-order control on simulated water tensions. Across parameter space, peat profiles with hypothetical combinations of Ks and α show a strong binary tendency towards targeting either water or carbon storage. Actual hydraulic properties from five northern peatlands fall at the interface between these goals, balancing the competing demands of carbon accumulation and water storage. We argue that peat hydraulic properties are thus optimised to maximise water use efficiency and that this optimisation occurs over a centennial to millennial timescale as the peatland develops. This provides a new conceptual framework to characterise peat hydraulic properties across climate zones and between a range of different disturbances, which can be used to provide benchmarks for peatland design and reclamation.
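    For reference, the retention parameter α above is the one in the standard van Genuchten water retention curve; in its usual form (with the Mualem constraint m = 1 − 1/n),

    ```latex
    \theta(\psi) \;=\; \theta_r + \frac{\theta_s - \theta_r}{\left[1 + (\alpha\,|\psi|)^{n}\right]^{m}},
    \qquad m = 1 - \frac{1}{n},
    ```

    where θ_r and θ_s are the residual and saturated water contents and ψ is the matric tension. Together with Ks, this fixes the first-order unsaturated flow behaviour the abstract refers to.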

  13. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters, showing how knowledge of parameter values is constrained by observations.

  14. Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0

    NASA Astrophysics Data System (ADS)

    Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; Luke, Catherine M.

    2016-08-01

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
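    To make the adjoint-based calibration idea concrete, here is a minimal sketch in the same spirit: a toy light-response curve stands in for the land-surface model, a hand-derived analytic gradient plays the role of the adjoint, and a gradient-based optimiser fits two parameters to synthetic flux "observations". Model form, parameters and data are all invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def gpp(p, par):
        """Toy light-response curve standing in for a land-surface model."""
        alpha, gmax = p
        return alpha * par * gmax / (alpha * par + gmax)

    def cost_and_grad(p, par, obs):
        """Least-squares cost and its analytic gradient (the role the
        adjoint plays for JULES, derived by hand for this toy model)."""
        alpha, gmax = p
        denom = alpha * par + gmax
        r = alpha * par * gmax / denom - obs
        dfa = par * gmax**2 / denom**2           # d(gpp)/d(alpha)
        dfg = (alpha * par)**2 / denom**2        # d(gpp)/d(gmax)
        return 0.5 * np.sum(r**2), np.array([np.sum(r * dfa), np.sum(r * dfg)])

    rng = np.random.default_rng(1)
    par = rng.uniform(50.0, 1500.0, 200)          # synthetic radiation forcing
    obs = gpp((0.05, 20.0), par) + rng.normal(0.0, 0.3, par.size)

    res = minimize(cost_and_grad, x0=[0.02, 10.0], args=(par, obs), jac=True,
                   method="L-BFGS-B", bounds=[(1e-4, 1.0), (1.0, 60.0)])
    print(res.x)   # recovers roughly (0.05, 20.0)
    ```

    The second-derivative (Hessian) information mentioned above would, in this picture, give the curvature of the cost surface around res.x and hence the parameter uncertainties.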

  15. Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting

    NASA Astrophysics Data System (ADS)

    Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy

    2018-06-01

    Spacecraft overtesting is a long-running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system and potentially the interface configuration mean that the vibration at the interface may not occur along a single axis, much less along the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment responses and those of the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values, and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element by a fastening point. The Angle Optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system when compared with traditional testing. The optimum testing configuration resulted in an overall average error significantly smaller than that of the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test, as opposed to the traditional three orthogonal-direction tests.
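    The core of the method is a small outer optimisation over the input direction. A toy version, with a hypothetical 2-D gain matrix in place of the real structural model and invented target RMS values:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    GAINS = np.array([[2.0, 0.4],      # RMS response of axis x to input (x, y)
                      [0.3, 1.5]])     # RMS response of axis y to input (x, y)
    TARGET_RMS = np.array([1.4, 1.1])  # coupled-system targets (assumed values)

    def rms_mismatch(theta):
        """Mismatch between single-test responses at input angle theta and
        the coupled-system RMS targets."""
        u = np.array([np.cos(theta), np.sin(theta)])   # unit input direction
        return np.sum((GAINS @ u - TARGET_RMS) ** 2)

    best = minimize_scalar(rms_mismatch, bounds=(0.0, np.pi / 2), method="bounded")
    print(np.degrees(best.x))   # offset test angle replacing two axis tests
    ```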

  16. Data-driven nonlinear optimisation of a simple air pollution dispersion model generating high resolution spatiotemporal exposure

    NASA Astrophysics Data System (ADS)

    Yuval; Bekhor, Shlomo; Broday, David M.

    2013-11-01

    Spatially detailed estimation of exposure to air pollutants in the urban environment is needed for many air pollution epidemiological studies. To benefit studies of acute effects of air pollution, such exposure maps are required at high temporal resolution. This study introduces a nonlinear optimisation framework that produces high-resolution spatiotemporal exposure maps. An extensive traffic model output, serving as a proxy for traffic emissions, is fitted, via a nonlinear model embodying basic dispersion properties, to high-temporal-resolution routine observations of a traffic-related air pollutant. An optimisation problem is formulated and solved at each time point to recover the unknown model parameters. These parameters are then used to produce a detailed concentration map of the pollutant for the whole area covered by the traffic model. Repeating the process for multiple time points results in the spatiotemporal concentration field. The exposure at any location and for any span of time can then be computed by temporal integration of the concentration time series at selected receptor locations for the durations of the desired periods. The methodology is demonstrated for NO2 exposure using the output of a traffic model for the greater Tel Aviv area, Israel, and the half-hourly monitoring and meteorological data from the local air quality network. A leave-one-out cross-validation resulted in simulated half-hourly concentrations that are almost unbiased compared to the observations, with a mean error (ME) of 5.2 ppb, a normalised mean error (NME) of 32%, 78% of the simulated values within a factor of two (FAC2) of the observations, and a coefficient of determination (R2) of 0.6. The whole-study-period integrated exposure estimations are also unbiased compared with their corresponding observations, with an ME of 2.5 ppb, an NME of 18%, a FAC2 of 100% and an R2 of 0.62.
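    Schematically, the per-time-point fitting step can look like the sketch below: a simple nonlinear map from the traffic proxy to concentration is fitted at the monitors for each half hour, then evaluated on the full grid. The power-law-plus-background form and the array shapes are assumptions, not the paper's exact dispersion model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def conc_model(proxy, a, b, c):
        """Hypothetical proxy-to-concentration map; c acts as background."""
        return a * proxy**b + c

    def concentration_field(proxy_mon, obs, proxy_grid):
        """proxy_mon[t, s]: traffic proxy at monitor sites; obs[t, s]: measured
        NO2 there; proxy_grid[t, g]: proxy on the map grid (all positive).
        One optimisation per time point, as in the abstract."""
        maps = []
        for t in range(obs.shape[0]):
            p, _ = curve_fit(conc_model, proxy_mon[t], obs[t],
                             p0=(1.0, 0.8, 5.0), maxfev=10000)
            maps.append(conc_model(proxy_grid[t], *p))
        return np.array(maps)    # spatiotemporal concentration field

    # Exposure at grid cell g over a period is then a temporal integral,
    # e.g. concentration_field(...)[t0:t1, g].mean() for the period [t0, t1).
    ```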

  17. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
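    For orientation, a minimal version of the forward and inverse steps under the Gaussian plume assumption: because concentration is linear in the emission rate, the rate follows from a one-parameter least squares against a unit-rate "footprint". All numbers below are invented; the paper's inversion additionally optimises the transport parameters and error statistics using the tracer.

    ```python
    import numpy as np

    def gaussian_plume(q, y, u, sigma_y, sigma_z, h=0.0, z=0.0):
        """Textbook Gaussian plume (with ground reflection) for a point
        source of rate q, crosswind offsets y, wind speed u."""
        return (q / (2 * np.pi * u * sigma_y * sigma_z)
                * np.exp(-y**2 / (2 * sigma_y**2))
                * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                   + np.exp(-(z + h)**2 / (2 * sigma_z**2))))

    y = np.linspace(-60, 60, 25)                      # transect across the plume
    footprint = gaussian_plume(1.0, y, u=3.0, sigma_y=15.0, sigma_z=8.0)
    c_meas = 0.004 * footprint + np.random.default_rng(2).normal(0, 1e-8, y.size)

    q_hat = footprint @ c_meas / (footprint @ footprint)   # least-squares rate
    print(q_hat)   # close to the true 0.004
    ```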

  18. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    PubMed

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

    The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effectual business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics, employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.

  19. Subject Specific Optimisation of the Stiffness of Footwear Material for Maximum Plantar Pressure Reduction.

    PubMed

    Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan

    2017-08-01

    Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness, and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16% and 19%, respectively. Moreover, the optimum stiffness was strongly correlated to body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.

  20. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
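    A compact illustration of the multiplier idea, with Katz's estimator standing in for the paper's own robust algorithm (Katz's FD mixes the unit-step time axis with the amplitude axis, so an amplitude multiplier changes it nonlinearly, as the abstract requires):

    ```python
    import numpy as np

    def katz_fd(x):
        """Katz fractal dimension of a 1-D signal (assumes unit time steps)."""
        n = len(x) - 1
        L = np.hypot(1.0, np.diff(x)).sum()                    # curve length
        d = np.hypot(np.arange(1, n + 1), x[1:] - x[0]).max()  # planar extent
        return np.log(n) / (np.log(n) + np.log(d / L))

    def optimal_multiplier(sig_a, sig_b, multipliers=np.logspace(-2, 2, 81)):
        """Scan amplitude multipliers for the one maximising the FD difference
        between two normalised signals -- the paper's selection criterion."""
        return max(multipliers,
                   key=lambda a: abs(katz_fd(a * sig_a) - katz_fd(a * sig_b)))
    ```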

  1. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  2. A new effective operator for the hybrid algorithm for solving global optimisation problems

    NASA Astrophysics Data System (ADS)

    Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac

    2018-04-01

    Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on the existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability in areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions in comparison with other algorithms. Moreover, the authors propose two parameters for balancing local and global search, as well as between various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions used in previous works show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
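    The flavour of such an operator can be sketched as follows: candidates are generated around the mean of a few elite solutions, probing regions between good solutions that crossover- or velocity-based operators may skip. This is a plausible reading offered for illustration, not the exact MPC operator.

    ```python
    import numpy as np

    def mean_search(population, fitness, n_elite=5, scale=0.1, rng=None):
        """Propose a candidate near the mean of the n_elite best solutions;
        'scale' trades local refinement against wider exploration."""
        rng = rng or np.random.default_rng()
        elite = population[np.argsort(fitness)[:n_elite]]   # best solutions
        centre = elite.mean(axis=0)
        spread = elite.std(axis=0) + 1e-12
        return centre + scale * spread * rng.standard_normal(centre.shape)

    # Inside the optimiser loop (minimisation): evaluate the proposal and
    # replace the worst population member whenever it improves on it.
    ```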

  3. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological developments that trigger changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management is considered. These workloads are combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that the model is able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.

  4. Optimization of Phase Change Memory with Thin Metal Inserted Layer on Material Properties

    NASA Astrophysics Data System (ADS)

    Harnsoongnoen, Sanchai; Sa-Ngiamsak, Chiranut; Siritaratiwat, Apirat

    This work reports, for the first time, a thorough study and optimisation of the Phase Change Memory (PCM) structure with a thin metal layer inserted in the chalcogenide, via the electrical resistivity (ρ), using finite element modelling. PCM is one of the best candidates for next-generation non-volatile memory. It has received much attention recently due to its fast write speed, non-destructive readout, superb scalability, and great compatibility with current silicon-based mass fabrication. The setback of PCM is a high reset current, typically higher than 1 mA based on 180 nm lithography. To reduce the reset current and to solve the over-programming failure, a PCM structure with a thin metal layer inserted in the chalcogenide (bottom chalcogenide/metal insert/top chalcogenide) has been proposed. Nevertheless, reports on optimisation of the electrical resistivity using the finite element method for this new PCM structure have never been published. This work aims to minimise the reset current of this PCM structure by optimising the electrical resistivity profile using the finite element approach. It clearly shows that PCM characteristics are strongly affected by the electrical resistivity. The 2-D simulation results reveal that the best heat transfer to, and self-Joule-heating at, the bottom chalcogenide layer are achieved under the condition ρ_bottom chalcogenide > ρ_metal inserted > ρ_top chalcogenide. More specifically, the optimised electrical resistivity of the PCM with thin metal insert (PCMTMI) is attained with a ρ_top chalcogenide : ρ_metal inserted : ρ_bottom chalcogenide ratio of 1:6:16 when ρ_top chalcogenide is 10^-3 Ωm. In conclusion, high energy efficiency can be obtained, with a reset current as low as 0.3 mA and high-speed operation of less than 30 ns.

  5. A TPS kernel for calculating survival vs. depth: distributions in a carbon radiotherapy beam, based on Katz's cellular Track Structure Theory.

    PubMed

    Waligórski, M P R; Grzanka, L; Korcyl, M; Olko, P

    2015-09-01

    An algorithm was developed for a treatment planning system (TPS) kernel for carbon radiotherapy in which Katz's Track Structure Theory of cellular survival (TST) is applied as the radiobiology component. The physical beam model is based on available tabulated data, prepared by Monte Carlo simulations of a set of pristine carbon beams of different input energies. An optimisation tool developed for this purpose is used to find the composition of input energies and fluences of pristine carbon beams which delivers a pre-selected depth-dose distribution profile over the spread-out Bragg peak (SOBP) region. Using an extrapolation algorithm, energy-fluence spectra of the primary carbon ions and of all their secondary fragments are obtained at regular steps of beam depth. To obtain survival-versus-depth distributions, the TST calculation is applied to the energy-fluence spectra of the mixed field of primary ions and their secondary products at the given beam depths. Katz's TST offers a unique analytical and quantitative prediction of cell survival in such mixed ion fields. By optimising the pristine beam composition to a published depth-dose profile over the SOBP region of a carbon beam, and using TST model parameters representing the survival of CHO (Chinese Hamster Ovary) cells in vitro, it was possible to satisfactorily reproduce a published data set of CHO cell survival-versus-depth measurements after carbon ion irradiation. The authors also show by a TST calculation that 'biological dose' is neither linear nor additive.
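    The pristine-beam composition step is, at heart, a constrained linear fit: given per-energy depth-dose kernels, find non-negative fluence weights that reproduce the target SOBP profile. A minimal sketch, with the kernels and target as placeholders for the tabulated Monte Carlo data:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def sobp_weights(D, target_profile):
        """D[i, j]: dose at depth z_j per unit fluence of pristine beam i
        (assumed precomputed from the beam library). Returns non-negative
        fluence weights w minimising ||D.T @ w - target_profile||."""
        w, residual = nnls(D.T, target_profile)
        return w, residual
    ```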

  6. Instant polysaccharide-based emulsions: impact of microstructure on lipolysis.

    PubMed

    Torcello-Gómez, Amelia; Foster, Timothy J

    2017-06-21

    The development of emulsion-based products through optimisation of ingredients and reduction in energy input during manufacture, while fulfilling healthy attributes, is a major objective within the food industry. Instant emulsions can meet these features, but comprehensive studies are necessary to investigate the effect of the initial formulation on the final microstructure and, in turn, on in vitro lipolysis, which constitutes the twofold aim of this work. The instant emulsion is formed within 1.5-3 min after pouring the aqueous phase into the oil phase, which contains a mixture of emulsifier (Tween 20), swelling particles (Sephadex) and thickeners (hydroxypropylmethylcellulose, HPMC, and guar gum, GG), under mild shearing (180 rpm). The creation of oil-in-water emulsions is monitored in situ by viscosity analysis, the final microstructure is visualised by microscopy, and the release of free fatty acids under simulated intestinal conditions is quantified by titration. Increasing the concentration and molecular weight (Mw) of GG leads to smaller emulsion droplets due to increased bulk viscosity upon shearing. This droplet size reduction is magnified when increasing the Mw of HPMC or the swelling capacity of the viscosifying particles. In addition, in the absence of the emulsifier Tween 20, the sole use of high-Mw HPMC is effective in emulsification due to combined increased bulk viscosity and interfacial activity. Hence, optimisation of the ingredient choice and usage level is possible when designing microstructures. Finally, emulsions with larger droplet size (>20 μm) display a slower rate and lower extent of lipolysis, while finer emulsions (droplet size ≤20 μm) exhibit maximum rate and extent profiles. This correlates with the extent of emulsion destabilisation observed under intestinal conditions.

  7. Fast and fuzzy multi-objective radiotherapy treatment plan generation for head and neck cancer patients with the lexicographic reference point method (LRPM)

    NASA Astrophysics Data System (ADS)

    van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan

    2017-06-01

    Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
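    For contrast with the LRPM's single solve, the 2pɛc baseline it is compared against can be sketched in two stages: optimise the top-priority objective, then optimise the next one subject to bounding the first near its optimum. The objectives below are invented stand-ins for dose-based planning objectives.

    ```python
    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    f1 = lambda x: (x[0] - 1.0) ** 2 + 0.1 * x[1] ** 2   # higher priority
    f2 = lambda x: (x[1] - 2.0) ** 2 + 0.1 * x[0] ** 2   # lower priority

    stage1 = minimize(f1, x0=[0.0, 0.0])
    slack = 0.05 * max(abs(stage1.fun), 1.0)             # allowed f1 degradation
    stage2 = minimize(f2, x0=stage1.x,
                      constraints=[NonlinearConstraint(f1, -np.inf,
                                                       stage1.fun + slack)])
    print(stage2.x)   # bounded loss on f1 traded for a large gain on f2
    ```

    With tens of prioritised objectives, this chain of consecutive solves is what makes 2pɛc slow; the LRPM folds the trade-off into one optimisation, hence the reported speed-up.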

  8. Optimisation of solar synoptic observations

    NASA Astrophysics Data System (ADS)

    Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal

    2012-09-01

    The development of instrumental and computer technologies is connected with steadily increasing needs for archiving large data volumes. The current trend to meet this requirement includes data compression and growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by means of an optimisation of the archiving that consists of data selection without losing useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by means of the analysis of changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes, which have a disturbing effect, and real changes, which provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun (including a period before the event onset), and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not conserve the uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, optimised archiving can save a large amount of storage capacity. The actual saving will depend on the setting of the change-detection sensitivity and on the capability to exclude fictitious changes.
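    The selection rule itself is simple; what carries the method is the change metric and its ability to reject fictitious changes. A minimal keep/drop sketch, with mean absolute difference as an assumed stand-in for the paper's change analysis:

    ```python
    import numpy as np

    def archive_stream(frames, threshold):
        """Archive a frame only if it differs enough from the last kept one.
        Real criteria must also reject 'fictitious' changes (clouds, seeing)."""
        kept = [frames[0]]
        for frame in frames[1:]:
            if np.mean(np.abs(frame - kept[-1])) > threshold:
                kept.append(frame)
        return kept
    ```

    Note this is exactly why the output is unsuitable for helioseismology: the kept frames are no longer uniformly spaced in time.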

  9. Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain

    NASA Astrophysics Data System (ADS)

    Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida

    2013-04-01

    Supply Chain Management (SCM) is an important activity in all producing facilities and in many organisations, enabling vendors, manufacturers and suppliers to interact gainfully and plan the flow of goods and services optimally. Simulation optimisation approaches are now widely used in research on finding the best solutions for decision-making processes in SCM, which generally face complexity, large sources of uncertainty and various decision factors. Metaheuristic methods are the most popular simulation optimisation approach. However, very little research has applied this approach to optimising simulation models for supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the best flexible inventory replenishment parameters that minimise the total operating cost. The simulation optimisation model is based on the Bees algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a relatively new member of the meta-heuristics family. It models the natural food-foraging behaviour of honey bees, which use several mechanisms, such as the waggle dance, to locate food sources optimally and to search for new ones. This makes them a good candidate for developing new algorithms for solving optimisation problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demand is assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
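    A minimal Bees algorithm in its standard form (scouts, elite sites, shrinking neighbourhood search); the paper's configuration and its supply-chain cost simulator are not reproduced here, so the objective is left as a callable.

    ```python
    import numpy as np

    def bees_algorithm(f, bounds, n_scouts=30, n_elite=5, n_recruits=10,
                       patch=0.1, iters=100, rng=None):
        """Minimise f over box bounds [(lo, hi), ...] with a basic BA."""
        rng = rng or np.random.default_rng()
        lo, hi = np.asarray(bounds, dtype=float).T
        scouts = rng.uniform(lo, hi, (n_scouts, lo.size))
        for _ in range(iters):
            scouts = scouts[np.argsort([f(s) for s in scouts])]
            for i in range(n_elite):                       # flower-patch search
                trials = np.clip(scouts[i] + patch * (hi - lo)
                                 * rng.uniform(-1, 1, (n_recruits, lo.size)),
                                 lo, hi)
                best = min(trials, key=f)
                if f(best) < f(scouts[i]):
                    scouts[i] = best
            # remaining scouts keep exploring the space at random
            scouts[n_elite:] = rng.uniform(lo, hi, (n_scouts - n_elite, lo.size))
            patch *= 0.98                                  # shrink neighbourhoods
        return scouts[0]

    # e.g. best = bees_algorithm(lambda x: simulated_total_cost(*x),
    #                            bounds=[(0, 50), (50, 200)])   # (s, S) levels
    ```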

  10. The QCDOC Project

    NASA Astrophysics Data System (ADS)

    Boyle, P.; Chen, D.; Christ, N.; Clark, M.; Cohen, S.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Li, S.; Lin, H.; Mawhinney, R.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2005-03-01

    The QCDOC project has developed a supercomputer optimised for the needs of Lattice QCD simulations. It provides a very competitive price-to-sustained-performance ratio of around US$1 per sustained Megaflop/s, in combination with outstanding scalability. Thus, very large systems delivering over 5 TFlop/s of performance on the evolution of a single lattice are possible. Large prototypes have been built and are functioning correctly. The software environment raises the state of the art in such custom supercomputers. It is based on a lean custom node operating system that eliminates many unnecessary overheads that plague other systems. Despite the custom nature, the operating system implements a standards-compliant UNIX-like programming environment, easing the porting of software from other systems. The SciDAC QMP interface adds internode communication in a fashion that provides a uniform cross-platform programming environment.

  11. The generalised isodamping approach for robust fractional PID controllers design

    NASA Astrophysics Data System (ADS)

    Beschi, M.; Padula, F.; Visioli, A.

    2017-06-01

    In this paper, we present a novel methodology to design fractional-order proportional-integral-derivative controllers. Based on the description of the controlled system by means of a family of linear models parameterised with respect to a free variable that describes the real process operating point, we design the controller by solving a constrained min-max optimisation problem where the maximum sensitivity has to be minimised. Among the imposed constraints, the most important one is the new generalised isodamping condition, that defines the invariancy of the phase margin with respect to the free parameter variations. It is also shown that the well-known classical isodamping condition is a special case of the new technique proposed in this paper. Simulation results show the effectiveness of the proposed technique and the superiority of the fractional-order controller compared to its integer counterpart.
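    For readers unfamiliar with the term: the classical isodamping condition that this paper generalises requires the open-loop phase to be flat at the gain crossover frequency, so that the phase margin φ_m survives loop-gain variations:

    ```latex
    \arg L(j\omega_{gc}) = -\pi + \varphi_m, \qquad
    \left.\frac{\mathrm{d}\,\arg L(j\omega)}{\mathrm{d}\omega}\right|_{\omega=\omega_{gc}} = 0 .
    ```

    The generalised version in the paper instead asks φ_m to be invariant with respect to the operating-point parameter of the model family.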

  12. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised with several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.

  13. Development and optimisation by means of sensory analysis of new beverages based on different fruit juices and sherry wine vinegar.

    PubMed

    Cejudo-Bastante, María Jesús; Rodríguez Dodero, M Carmen; Durán Guerrero, Enrique; Castro Mejías, Remedios; Natera Marín, Ramón; García Barroso, Carmelo

    2013-03-15

    Despite the long history of sherry wine vinegar, new alternatives for consumption are being developed with the aim of diversifying its market. Several new acetic-based fruit juices have been developed by optimising the amount of sherry wine vinegar added to different fruit juices: apple, peach, orange and pineapple. Once the concentrations of wine vinegar were optimised by an expert panel, the new acetic fruit juices were tasted by 86 consumers. Three different aspects were taken into account: habits of consumption of vinegar and fruit juices, gender and age. Based on the sensory analysis, 50 g kg⁻¹ was the optimal and preferred amount of wine vinegar added to the apple, orange and peach juices, whereas 10 g kg⁻¹ was the favourite for the pineapple juice. Based on olfactory and gustatory impressions and 'purchase intent', the acetic beverages made from peach and pineapple juices were the most appreciated, followed by apple juice, while those obtained from orange juice were the least preferred by consumers. New opportunities for diversification of the oenological market could arise from the development of this type of new product, which can easily be produced by any vinegar or fruit juice maker.

  14. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results on classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
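    The feature-construction step described above is easy to reproduce in outline; the sketch uses PyWavelets and scikit-learn, with the wavelet choice, decomposition level and SVM hyper-parameters as guesses rather than the paper's tuned values (which the bees algorithm selects).

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wp_energy_features(signal, wavelet="db4", level=3):
        """Normalised energy of wavelet-packet coefficients per frequency
        channel -- the feature vector described in the abstract."""
        wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")
        energies = np.array([np.sum(node.data ** 2) for node in nodes])
        return energies / energies.sum()

    # Single flat SVM on the features (the paper layers several classifiers):
    # X = np.array([wp_energy_features(s) for s in echo_signals])
    # clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, defect_labels)
    ```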

  15. Pre-operative optimisation of lung function

    PubMed Central

    Azhar, Naheed

    2015-01-01

    The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta 2 agonists, inhaled corticosteroids and systemic corticosteroids, are the main drugs used for this and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function. PMID:26556913

  16. On the optimisation of the use of 3He in radiation portal monitors

    NASA Astrophysics Data System (ADS)

    Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet

    2013-02-01

    Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has created a capacity problem for manufacturers trying to provide enough detectors to satisfy market demand. In this paper we analyse the design of typical commercial RPMs and try to optimise the detector parameters, either to maximise the efficiency for the same amount of 3He, or to minimise the amount of gas needed to reach the same detection performance by reducing the volume or gas pressure in an optimised design.

  17. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
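    The hybridisation itself reduces to one step: periodically polish the incumbent best solution with a simplex downhill (Nelder-Mead) search and feed any improvement back into the solution archive. ACOR's own bookkeeping is elided in this sketch.

    ```python
    from scipy.optimize import minimize

    def polish_best(f, best_solution):
        """Nelder-Mead (simplex downhill) refinement of the incumbent best."""
        res = minimize(f, best_solution, method="Nelder-Mead",
                       options={"maxiter": 200})
        return res.x if res.fun < f(best_solution) else best_solution
    ```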

  18. Determination of volatile monophenols in beer using acetylation and headspace solid-phase microextraction in combination with gas chromatography and mass spectrometry.

    PubMed

    Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R

    2010-08-31

    Monophenols are widespread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer, but so far little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO3 as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time and temperature, and salt addition. Afterwards, we successfully calibrated and validated the method and applied it to the analysis of monophenols in beer samples.

  19. Dwell time-based stabilisation of switched delay systems using free-weighting matrices

    NASA Astrophysics Data System (ADS)

    Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay

    2018-01-01

    In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. A sufficient condition on the dwell time is then derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities (LMIs), the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise dwell-time-minimising controllers. The algorithm solves the problem by successive linearisation of the nonlinear conditions.
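    As background, dwell-time bounds of this kind typically take the following standard form: if each subsystem's Lyapunov-Krasovskii functional decays exponentially and the functionals are comparable at switching instants, stability follows for sufficiently slow switching (a schematic statement, not the paper's exact bound):

    ```latex
    \dot V_i(x_t) \le -\lambda\, V_i(x_t), \qquad
    V_i(x_t) \le \mu\, V_j(x_t)\ \ (\mu \ge 1)
    \quad\Longrightarrow\quad
    \tau_d > \frac{\ln \mu}{\lambda}\ \text{guarantees asymptotic stability.}
    ```

    The free-weighting matrices enter in proving the decay condition with the least conservative λ, which is what tightens the resulting bound on τ_d.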

  20. Simulation in paediatric urology and surgery. Part 1: An overview of educational theory.

    PubMed

    Nataraja, Ramesh M; Webb, Nathalie; Lopez, Pedro-Jose

    2018-03-01

    Surgical training has changed radically in the last few decades. The traditional Halstedian model of time-bound apprenticeship has been replaced with competency-based training. Advanced understanding of mastery learning principles has vastly altered educational methodology in surgical training, in terms of instructional design, delivery of educational content, assessment of learning, and programmatic evaluation. As part of this educational revolution, fundamentals of simulation-based education have been adopted into all levels and aspects of surgical training, requiring an understanding of the concepts of fidelity and realism and the impact they have on learning. There are many educational principles and theories that can help clinical teachers understand the way their trainees learn. In the acquisition of surgical expertise, the concepts of mastery learning, deliberate practice, and experiential learning are particularly important. Furthermore, surgical teachers need to understand the principles of effective feedback, which is essential to all forms of skills learning. This article, the first of two papers, presents an overview of relevant learning theory for the busy paediatric surgeon and urologist. It seeks to introduce the concepts underpinning current changes in surgical education and training, and provides practical tips to optimise teaching endeavours.

  1. Simulation of longitudinal dynamics of long freight trains in positioning operations

    NASA Astrophysics Data System (ADS)

    Qi, Zhaohui; Huang, Zhihao; Kong, Xianchao

    2012-09-01

    Positioning operations are performed in a railway goods yard, in which the freight train is pulled precisely to a specific point by a positioner. The positioner moves strictly according to the predesigned speed and provides all the traction and braking forces, which are highly dependent on the longitudinal dynamic response. In order to improve efficiency and protect the wagons from damage during positioning operations, the design speed of the positioner has to be optimised based on the simulation of longitudinal train dynamics. However, traditional models of longitudinal train dynamics are not accurate enough in some respects. In this study, we modify the traditional theory to make it suitable for the study of long freight trains in positioning operations. In the proposed method, instead of the traction force on the train, the motion of the positioner is assumed to be known; more importantly, the traditional draft gear model with a nonlinear spring and linear damping is replaced by a more detailed model based on results from contact and impact mechanics; the switching effects of the resistance and the coupler slack are also taken into consideration. Numerical examples dealing with positioning operations on straight lines, slope lines and curved lines are given.
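    A toy coupler/draft-gear force law showing the slack and contact effects mentioned above (a sketch under assumed constants, not the paper's detailed impact model):

    ```python
    def coupler_force(rel_disp, rel_vel, slack=0.02, k=2e7, c=1e5):
        """Zero force while the coupler rides inside the slack gap; a stiff
        spring-damper contact force outside it. rel_disp / rel_vel are the
        relative displacement / velocity between adjacent wagons (SI units)."""
        if abs(rel_disp) <= slack / 2:         # riding in the gap: no force
            return 0.0
        pen = rel_disp - (slack / 2 if rel_disp > 0 else -slack / 2)
        return -(k * pen + c * rel_vel)        # restoring contact force
    ```

    The discontinuity at the gap edges is exactly the switching effect that makes the longitudinal response, and hence the optimal positioner speed profile, hard to compute with smooth traditional models.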

  2. Incorporation of perception-based information in robot learning using fuzzy reinforcement learning agents

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo

    2002-04-01

    Robot learning in unstructured environments has proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilise perceptions to guide their learning towards those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. For this reason, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find applications in ocean exploration, detection and sea rescue, as well as in military maritime activity.

  3. Coagulation kinetics beyond mean field theory using an optimised Poisson representation.

    PubMed

    Burnett, James; Ford, Ian J

    2015-05-21

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.

  4. Development of a control system for a river hydrokinetic turbine, and its optimisation (Developpement d'une commande pour une hydrolienne de riviere et optimisation)

    NASA Astrophysics Data System (ADS)

    Tetrault, Philippe

    Following the development of renewable energies, the present study provides a theoretical basis for the fundamental principles required for the proper operation and implementation of a river hydrokinetic turbine. The problem posed by this new type of device is first presented. The electric machine used in the application, namely the permanent-magnet synchronous machine, is studied: its mechanical and electrical dynamic equations are developed, introducing at the same time the concept of the rotating reference frame. The operation of the inverter used, a two-level full-bridge semiconductor topology, is explained and put into equations in order to understand the available modulation strategies. A brief history of these strategies is given before focusing on space vector modulation, the strategy used for the present application. The different modules are assembled in a Matlab simulation to confirm their proper operation and to compare the simulation results with the theoretical calculations. Various algorithms for tracking and maintaining an optimal operating point are presented. The behaviour of the river is studied in order to assess the magnitude of the disturbances that the system will have to handle. Finally, a new approach is presented and compared with a more conservative strategy using another Matlab simulation model.
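    Of the operating-point tracking algorithms mentioned, perturb-and-observe is the classic choice and is easy to sketch (illustrative only; the thesis may implement a different variant):

    ```python
    def perturb_and_observe(read_power, set_speed, w0, dw=0.05, iters=200):
        """Hill-climb the turbine power curve: perturb the speed setpoint and
        keep the perturbation direction whenever extracted power increases."""
        w, step = w0, dw
        p_prev = read_power()
        for _ in range(iters):
            w += step
            set_speed(w)
            p = read_power()
            if p < p_prev:        # power fell: reverse the perturbation
                step = -step
            p_prev = p
        return w
    ```

    In a river deployment, the step size dw trades tracking speed against sensitivity to the flow disturbances characterised in the study.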

  5. A DNS Investigation of Non-Newtonian Turbulent Open Channel Flow

    NASA Astrophysics Data System (ADS)

    Guang, Raymond; Rudman, Murray; Chryss, Andrew; Slatter, Paul; Bhattacharya, Sati

    2010-06-01

    The flow of non-Newtonian fluids in open channels has great significance in many industrial settings from water treatment to mine waste disposal. The turbulent behaviour during transportation of these materials is of interest for many reasons, one of which is keeping settleable particles in suspension. The mechanism governing particle transport in turbulent flow has been studied in the past, but is not well understood. A better understanding of the mechanism operating in the turbulent flow of non-Newtonian suspensions in open channel would lead to improved design of many of the systems used in the mining and mineral processing industries. The objective of this paper is to introduce our work on the Direct Numerical Simulation of turbulent flow of non-Newtonian fluids in an open channel. The numerical method is based on spectral element/Fourier formulation. The flow simulation of a Herschel-Bulkley fluid agrees qualitatively with experimental results. The simulation results over-predict the flow velocity by approximately 15% for the cases considered, although the source of the discrepancy is difficult to ascertain. The effect of variation in yield stress and assumed flow depth are investigated and used to assess the sensitivity of the flow to these physical parameters. This methodology is seen to be useful in designing and optimising the transport of slurries in open channels.

  6. Polarizable atomic multipole-based force field for DOPC and POPE membrane lipids

    NASA Astrophysics Data System (ADS)

    Chu, Huiying; Peng, Xiangda; Li, Yan; Zhang, Yuebin; Min, Hanyi; Li, Guohui

    2018-04-01

    A polarizable atomic multipole-based force field for the membrane bilayer models 1,2-dioleoyl-phosphocholine (DOPC) and 1-palmitoyl-2-oleoyl-phosphatidylethanolamine (POPE) has been developed. The force field adopts the same framework as the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) model, in which the charge distribution of each atom is represented by the permanent atomic monopole, dipole and quadrupole moments. Many-body polarization including the inter- and intra-molecular polarization is modelled in a consistent manner with distributed atomic polarizabilities. The van der Waals parameters were first transferred from existing AMOEBA parameters for small organic molecules and then optimised by fitting to ab initio intermolecular interaction energies between models and a water molecule. Molecular dynamics simulations of the two aqueous DOPC and POPE membrane bilayer systems, consisting of 72 model molecules, were then carried out to validate the force field parameters. Membrane width, area per lipid, volume per lipid, deuterium order parameters, electron density profile, etc. were consistent with experimental values.

  7. Reference governors for controlled belt restraint systems

    NASA Astrophysics Data System (ADS)

    van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.

    2010-07-01

    Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.

  8. Localisation of an Unknown Number of Land Mines Using a Network of Vapour Detectors

    PubMed Central

    Chhadé, Hiba Haj; Abdallah, Fahed; Mougharbel, Imad; Gning, Amadou; Julier, Simon; Mihaylova, Lyudmila

    2014-01-01

    We consider the problem of localising an unknown number of land mines using concentration information provided by a wireless sensor network. A number of vapour sensors/detectors, deployed in the region of interest, are able to detect the concentration of the explosive vapours, emanating from buried land mines. The collected data is communicated to a fusion centre. Using a model for the transport of the explosive chemicals in the air, we determine the unknown number of sources using a Principal Component Analysis (PCA)-based technique. We also formulate the inverse problem of determining the positions and emission rates of the land mines using concentration measurements provided by the wireless sensor network. We present a solution for this problem based on a probabilistic Bayesian technique using a Markov chain Monte Carlo sampling scheme, and we compare it to the least squares optimisation approach. Experiments conducted on simulated data show the effectiveness of the proposed approach. PMID:25384008
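
    The source-counting step can be sketched compactly: stack the sensor readings into a sensors-by-time matrix and count the dominant principal components. The geometry, gain patterns and noise level below are invented for illustration and do not reproduce the paper's transport model.

```python
# Hedged sketch of the PCA-based source-count idea: the number of dominant
# singular values of the sensor-by-time concentration matrix estimates the
# number of independent vapour sources.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_times, true_sources = 12, 100, 3

# Each source contributes a fixed spatial gain pattern times a time profile.
gains = rng.uniform(0.1, 1.0, size=(n_sensors, true_sources))
profiles = rng.uniform(0.5, 1.5, size=(true_sources, n_times))
data = gains @ profiles + 0.01 * rng.normal(size=(n_sensors, n_times))

data -= data.mean(axis=1, keepdims=True)       # centre each sensor before PCA
s = np.linalg.svd(data, compute_uv=False)
explained = s ** 2 / np.sum(s ** 2)
n_hat = int(np.sum(explained > 0.01))          # threshold on explained variance

print("estimated number of sources:", n_hat)
```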

  9. Study on optimal configuration of the grid-connected wind-solar-battery hybrid power system

    NASA Astrophysics Data System (ADS)

    Ma, Gang; Xu, Guchao; Ju, Rong; Wu, Tiantian

    2017-08-01

    The capacity allocation of each energy unit in the grid-connected wind-solar-battery hybrid power system is a significant element of system design. In this paper, taking power-grid dispatching into account, the research priorities are as follows: (1) we establish mathematical models of each energy unit in the hybrid power system; (2) based on dispatching of the power grid, energy surplus rate, system energy volatility and total cost, we establish an evaluation system for the wind-solar-battery power system, using the number of devices of each type as a constraint; (3) based on an improved genetic algorithm, we put forward a multi-objective optimisation algorithm to solve the optimal configuration problem of the hybrid power system, so as to achieve high efficiency and economy in the grid-connected system. The simulation results show that the grid-connected wind-solar-battery hybrid power system has a high comprehensive performance and that the method of optimal configuration presented in this paper is useful and reasonable.

  10. Using the Person-Based Approach to optimise a digital intervention for the management of hypertension

    PubMed Central

    Morton, Katherine; Band, Rebecca; van Woezik, Anne; Grist, Rebecca; McManus, Richard J.; Little, Paul; Yardley, Lucy

    2018-01-01

    Background For behaviour-change interventions to be successful they must be acceptable to users and overcome barriers to behaviour change. The Person-Based Approach can help to optimise interventions to maximise acceptability and engagement. This article presents a novel, efficient and systematic method that can be used as part of the Person-Based Approach to rapidly analyse data from development studies to inform intervention modifications. We describe how we used this approach to optimise a digital intervention for patients with hypertension (HOME BP), which aims to implement medication and lifestyle changes to optimise blood pressure control. Methods In study 1, hypertensive patients (N = 12) each participated in three think-aloud interviews, providing feedback on a prototype of HOME BP. In study 2 patients (N = 11) used HOME BP for three weeks and were then interviewed about their experiences. Studies 1 and 2 were used to identify detailed changes to the intervention content and potential barriers to engagement with HOME BP. In study 3 (N = 7) we interviewed hypertensive patients who were not interested in using an intervention like HOME BP to identify potential barriers to uptake, which informed modifications to our recruitment materials. Analysis in all three studies involved detailed tabulation of patient data and comparison to our modification criteria. Results Studies 1 and 2 indicated that the HOME BP procedures were generally viewed as acceptable and feasible, but also highlighted concerns about monitoring blood pressure correctly at home and making medication changes remotely. Patients in study 3 had additional concerns about the safety and security of the intervention. Modifications improved the acceptability of the intervention and recruitment materials. Conclusions This paper provides a detailed illustration of how to use the Person-Based Approach to refine a digital intervention for hypertension. The novel, efficient approach to analysis and criteria for deciding when to implement intervention modifications described here may be useful to others developing interventions. PMID:29723262

  11. An approach to optimised control of HVAC systems in indoor swimming pools

    NASA Astrophysics Data System (ADS)

    Ribeiro, Eliseu M. A.; Jorge, Humberto M. M.; Quintela, Divo A. A.

    2016-04-01

    Indoor swimming pools are recognised as having a high level of energy consumption and present a great potential for energy saving. Energy is spent in several ways, such as evaporative heat loss from the pool, the high ventilation rates required to guarantee indoor air quality, and the high ambient temperatures (typically 28-30°C) required to maintain comfort conditions. This paper presents an approach to the optimised control of heating, ventilation and air-conditioning (HVAC) systems that could be implemented in a building energy management system. It is easily adapted to any kind of pool and results in a significant reduction in energy consumption. The development and validation of the control model were carried out with building thermal simulation software. The use of this control model in the case-study building could reduce the energy efficiency index by 7.14 points (7.4% of total), which corresponds to an energy cost saving of 15,609€ (7.5% of total).

  12. Optimization of Process Parameters for High Efficiency Laser Forming of Advanced High Strength Steels within Metallurgical Constraints

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, Ghazal; Griffiths, Jonathan; Dearden, Geoff; Edwardson, Stuart P.

    Laser forming (LF) has been shown to be a viable alternative for forming automotive-grade advanced high strength steels (AHSS). Due to their high strength, heat sensitivity and low conventional formability, these steels exhibit early fracture, large springback, batch-to-batch inconsistency and high tool wear when formed conventionally. In this paper, optimisation of the LF process parameters has been conducted to further understand the impact of a surface heat treatment on DP1000. An FE numerical simulation has been developed to analyse the dynamic thermo-mechanical effects and has been verified against empirical data. The goal of the optimisation has been to develop a usable process window for the LF of AHSS within strict metallurgical constraints. Results indicate that it is possible to laser form this material; however, a complex relationship has been found between the generation and maintenance of hardness values in the heated zone. A laser surface hardening effect has been observed that could be beneficial to the efficiency of the process.

  13. Production, optimisation and characterisation of angiotensin converting enzyme inhibitory peptides from sea cucumber (Stichopus japonicus) gonad.

    PubMed

    Zhong, Chan; Sun, Le-Chang; Yan, Long-Jie; Lin, Yi-Chen; Liu, Guang-Ming; Cao, Min-Jie

    2018-01-24

    In this study, the production of bioactive peptides with angiotensin converting enzyme (ACE) inhibitory activity from sea cucumber (Stichopus japonicus) gonad using commercial Protamex was optimised by response surface methodology (RSM). The optimal condition to achieve the highest ACE inhibitory activity in sea cucumber gonad hydrolysate (SCGH) was hydrolysis for 1.95 h at an E/S of 0.75%. For further characterisation, three individual peptides (EIYR, LF and NAPHMR) were purified and identified. The peptide NAPHMR showed the highest ACE inhibitory activity, with an IC50 of 260.22 ± 3.71 μM. NAPHMR was stable against simulated gastrointestinal digestion and revealed no significant cytotoxicity toward Caco-2 cells. A molecular docking study suggested that the Arg, His and Asn residues in NAPHMR interact with the S2 pocket or Zn2+ binding motifs of ACE via hydrogen or π-bonds, potentially contributing to the ACE inhibitory effect. Sea cucumber gonad is thus a potential resource for producing ACE inhibitory peptides for the preparation of functional foods.
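
    The response-surface step can be illustrated with a short sketch: fit a second-order polynomial in hydrolysis time t and enzyme/substrate ratio e to design-point measurements and solve for the stationary point. The design points and responses below are invented, not the study's data.

```python
# Illustrative RSM sketch: fit activity ~ b0 + b1*t + b2*e + b3*t^2 + b4*e^2
# + b5*t*e by least squares and locate the stationary maximum.
import numpy as np

# Hypothetical design points (t in h, e in %) and measured ACE inhibition (%).
t = np.array([1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 1.0, 3.0, 2.0])
e = np.array([0.5, 1.0, 0.5, 1.0, 0.75, 0.75, 0.75, 0.75, 1.0])
y = np.array([55, 60, 58, 57, 70, 69, 63, 62, 64], dtype=float)

X = np.column_stack([np.ones_like(t), t, e, t**2, e**2, t*e])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: solve grad = 0 for the fitted quadratic surface.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
t_opt, e_opt = np.linalg.solve(H, g)
print(f"fitted optimum: t = {t_opt:.2f} h, E/S = {e_opt:.2f} %")
```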

  14. Optimising the neutron environment of Radiation Portal Monitors: A computational study

    NASA Astrophysics Data System (ADS)

    Gilbert, Mark R.; Ghani, Zamir; McMillan, John E.; Packer, Lee W.

    2015-09-01

    Efficient and reliable detection of radiological or nuclear threats is a crucial part of national and international efforts to prevent terrorist activities. Radiation Portal Monitors (RPMs), which are deployed worldwide, are intended to interdict smuggled fissile material by detecting emissions of neutrons and gamma rays. However, considering the range and variety of threat sources, vehicular and shielding scenarios, and that only a small signature is present, it is important that the design of the RPMs allows these signatures to be accurately differentiated from the environmental background. Using Monte-Carlo neutron-transport simulations of a model 3He detector system we have conducted a parameter study to identify the optimum combination of detector shielding, moderation, and collimation that maximises the sensitivity of neutron-sensitive RPMs. These structures, which could be simply and cost-effectively added to existing RPMs, can improve the detector response by more than a factor of two relative to an unmodified, bare design. Furthermore, optimisation of the air gap surrounding the helium tubes also improves detector efficiency.

  15. Detection of bremsstrahlung radiation of 90Sr-90Y for emergency lung counting.

    PubMed

    Ho, A; Hakmana Witharana, S S; Jonkmans, G; Li, L; Surette, R A; Dubeau, J; Dai, X

    2012-09-01

    This study explores the possibility of developing a field-deployable (90)Sr detector for rapid lung counting in emergency situations. The detection of beta-emitters (90)Sr and its daughter (90)Y inside the human lung via bremsstrahlung radiation was performed using a 3″ × 3″ NaI(Tl) crystal detector and a polyethylene-encapsulated source to emulate human lung tissue. The simulation results show that this method is a viable technique for detecting (90)Sr with a minimum detectable activity (MDA) of 1.07 × 10(4) Bq, using a realistic dual-shielded detector system in a 0.25-µGy h(-1) background field for a 100-s scan. The MDA is sufficiently sensitive to meet the requirement for emergency lung counting of Type S (90)Sr intake. The experimental data were verified using Monte Carlo calculations, including an estimate for internal bremsstrahlung, and an optimisation of the detector geometry was performed. Optimisations in background reduction techniques and in the electronic acquisition systems are suggested.

  16. Low power test architecture for dynamic read destructive fault detection in SRAM

    NASA Astrophysics Data System (ADS)

    Takher, Vikram Singh; Choudhary, Rahul Raj

    2018-06-01

    Dynamic read destructive faults (dRDFs) are the outcome of resistive-open defects in the core cells of static random-access memories (SRAMs). Sensitising a dRDF involves either performing multiple read operations or creating a number of read equivalent stresses (RESs) on the core cell under test. Although creating RESs is preferred over performing multiple read operations on the core cell, the cell dissipates more power during an RES than during a read or write operation. This paper focuses on reducing power dissipation by optimising the number of RESs required to sensitise a dRDF during the test mode of SRAM operation. A novel pre-charge architecture is proposed that reduces power dissipation by limiting the number of RESs to an optimised value of two. Simulation and analysis of the proposed low-power architecture show a reduction in power dissipation of up to 18.18%.

  17. Optimal sensor placement for modal testing on wind turbines

    NASA Astrophysics Data System (ADS)

    Schulze, Andreas; Zierath, János; Rosenow, Sven-Erik; Bockhahn, Reik; Rachholz, Roman; Woernle, Christoph

    2016-09-01

    The mechanical design of wind turbines requires a profound understanding of their dynamic behaviour. Even though highly detailed simulation models are already in use to support wind turbine design, modal testing on a real prototype remains irreplaceable for identifying site-specific conditions such as the stiffness of the tower foundation. Correct identification of the mode shapes of a complex mechanical structure depends strongly on the placement of the sensors. For operational modal analysis of a 3 MW wind turbine with a 120 m rotor on a 100 m tower developed by W2E Wind to Energy, algorithms for optimal placement of acceleration sensors are applied. The mode shapes used for the optimisation are calculated by means of a detailed flexible multibody model of the wind turbine. Among the three algorithms in this study, the genetic algorithm with a weighted off-diagonal criterion yields the sensor configuration with the highest quality. The ongoing measurements on the prototype will form the basis for the development of optimised wind turbine designs.
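
    The off-diagonal criterion can be sketched as follows: for each candidate sensor subset, form the modal assurance criterion (MAC) matrix from the FE mode shapes restricted to those sensors and penalise the largest off-diagonal entry (modes that look alike to the sensors). In this hedged toy, random vectors stand in for the multibody-model mode shapes and exhaustive search stands in for the genetic algorithm.

```python
# Sensor placement by minimising the maximum off-diagonal MAC entry.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n_dof, n_modes, n_sensors = 12, 4, 6
phi = rng.normal(size=(n_dof, n_modes))        # placeholder mode-shape matrix

def max_offdiag_mac(sensors):
    p = phi[list(sensors), :]
    num = (p.T @ p) ** 2                       # squared modal inner products
    den = np.outer(np.diag(p.T @ p), np.diag(p.T @ p))
    mac = num / den
    np.fill_diagonal(mac, 0.0)
    return mac.max()

# Exhaustive search over all 6-sensor subsets (924 candidates here).
best = min(combinations(range(n_dof), n_sensors), key=max_offdiag_mac)
print("selected sensor DOFs:", best,
      "max off-diag MAC:", round(max_offdiag_mac(best), 3))
```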

  18. Focal psychodynamic therapy, cognitive behaviour therapy, and optimised treatment as usual in outpatients with anorexia nervosa (ANTOP study): randomised controlled trial.

    PubMed

    Zipfel, Stephan; Wild, Beate; Groß, Gaby; Friederich, Hans-Christoph; Teufel, Martin; Schellberg, Dieter; Giel, Katrin E; de Zwaan, Martina; Dinkel, Andreas; Herpertz, Stephan; Burgmer, Markus; Löwe, Bernd; Tagay, Sefik; von Wietersheim, Jörn; Zeeck, Almut; Schade-Brittinger, Carmen; Schauenburg, Henning; Herzog, Wolfgang

    2014-01-11

    Psychotherapy is the treatment of choice for patients with anorexia nervosa, although evidence of efficacy is weak. The Anorexia Nervosa Treatment of OutPatients (ANTOP) study aimed to assess the efficacy and safety of two manual-based outpatient treatments for anorexia nervosa--focal psychodynamic therapy and enhanced cognitive behaviour therapy--versus optimised treatment as usual. The ANTOP study is a multicentre, randomised controlled efficacy trial in adults with anorexia nervosa. We recruited patients from ten university hospitals in Germany. Participants were randomly allocated to 10 months of treatment with either focal psychodynamic therapy, enhanced cognitive behaviour therapy, or optimised treatment as usual (including outpatient psychotherapy and structured care from a family doctor). The primary outcome was weight gain, measured as increased body-mass index (BMI) at the end of treatment. A key secondary outcome was rate of recovery (based on a combination of weight gain and eating disorder-specific psychopathology). Analysis was by intention to treat. This trial is registered at http://isrctn.org, number ISRCTN72809357. Of 727 adults screened for inclusion, 242 underwent randomisation: 80 to focal psychodynamic therapy, 80 to enhanced cognitive behaviour therapy, and 82 to optimised treatment as usual. At the end of treatment, 54 patients (22%) were lost to follow-up, and at 12-month follow-up a total of 73 (30%) had dropped out. At the end of treatment, BMI had increased in all study groups (focal psychodynamic therapy 0·73 kg/m(2), enhanced cognitive behaviour therapy 0·93 kg/m(2), optimised treatment as usual 0·69 kg/m(2)); no differences were noted between groups (mean difference between focal psychodynamic therapy and enhanced cognitive behaviour therapy -0·45, 95% CI -0·96 to 0·07; focal psychodynamic therapy vs optimised treatment as usual -0·14, -0·68 to 0·39; enhanced cognitive behaviour therapy vs optimised treatment as usual 0·30, -0·22 to 0·83). At 12-month follow-up, the mean gain in BMI had risen further (1·64 kg/m(2), 1·30 kg/m(2), and 1·22 kg/m(2), respectively), but no differences between groups were recorded (0·10, -0·56 to 0·76; 0·25, -0·45 to 0·95; 0·15, -0·54 to 0·83, respectively). No serious adverse events attributable to weight loss or trial participation were recorded. Optimised treatment as usual, combining psychotherapy and structured care from a family doctor, should be regarded as solid baseline treatment for adult outpatients with anorexia nervosa. Focal psychodynamic therapy proved advantageous in terms of recovery at 12-month follow-up, and enhanced cognitive behaviour therapy was more effective with respect to speed of weight gain and improvements in eating disorder psychopathology. Long-term outcome data will be helpful to further adapt and improve these novel manual-based treatment approaches. German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), German Eating Disorders Diagnostic and Treatment Network (EDNET). Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Investigating the Trade-Off Between Power Generation and Environmental Impact of Tidal-Turbine Arrays Using Array Layout Optimisation and Habitat Suitability Modelling.

    NASA Astrophysics Data System (ADS)

    du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.

    2016-12-01

    The installation of tidal turbines into the ocean will inevitably affect the environment around them, but due to the relative infancy of this sector the extent and severity of such effects are unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics, which in turn could affect, for example, sediment transport or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent on influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt estimates the likelihood of finding a species in a given location based upon environmental input data and presence data for the species. Environmental features that are known to affect habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate the population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, by adjusting the position and number of turbines within them. It uses a 2D shallow-water model with turbine arrays represented as adjustable friction fields, and can also optimise user-created functionals expressed mathematically. This work uses two functionals: the power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species. In each scenario studied, a range of array formations is found, expressing varying preferences for either functional. Further analyses then allow the identification of trade-offs between the two key societal objectives of energy production and conservation, producing information valuable to stakeholders and policymakers when making decisions on array design.

  20. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  1. Base flow calibration in a global hydrological model

    NASA Astrophysics Data System (ADS)

    van Beek, L. P.; Bierkens, M. F.

    2006-12-01

    Base flow constitutes an important water resource in many parts of the world. Its provenance and yield over time are governed by the storage capacity of local aquifers and the internal drainage paths, which are difficult to capture at the global scale. To represent the spatial and temporal variability in base flow adequately in a distributed global model at 0.5 degree resolution, we resorted to the conceptual model of aquifer storage of Kraaijenhoff van de Leur (1958), which yields the reservoir coefficient for a linear groundwater store. This model was parameterised using global information on drainage density, climatology and lithology. Initial estimates of aquifer thickness, permeability and specific porosity from the literature were linked to the latter two categories and calibrated to low-flow data by means of simulated annealing, so as to conserve the ordinal information contained in them. The observations used stem from the RivDis dataset of monthly discharge. From this dataset 324 stations were selected with at least 10 years of observations in the period 1958-1991 and an areal coverage of at least 10 cells of 0.5 degree. The dataset was split between basins into a calibration and a validation set whilst preserving a representative distribution of lithology types and climate zones. Optimisation involved minimising the absolute differences between the simulated base flow and the lowest 10% of the observed monthly discharge. Subsequently, the reliability of the calibrated parameters was tested by reversing the calibration and validation sets.
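
    A stripped-down version of this calibration can be sketched with a single linear store, whose recession is Q(t) = Q0 exp(-t/J) with reservoir coefficient J, and a simulated-annealing search for J against synthetic low-flow observations. The data, cooling schedule and bounds below are invented.

```python
# Simulated annealing of a linear-reservoir coefficient against 'observed'
# low flows; a toy, not the global-model calibration itself.
import math
import random

random.seed(0)
t_obs = list(range(30))
q_obs = [10.0 * math.exp(-t / 12.0) for t in t_obs]        # synthetic truth, J = 12

def cost(j):
    return sum(abs(10.0 * math.exp(-t / j) - q) for t, q in zip(t_obs, q_obs))

j, temp = 5.0, 1.0
best_j, best_c = j, cost(j)
for step in range(2000):
    cand = max(0.5, j + random.gauss(0.0, 1.0))            # perturb J
    dc = cost(cand) - cost(j)
    if dc < 0 or random.random() < math.exp(-dc / temp):   # Metropolis acceptance
        j = cand
    temp *= 0.998                                          # geometric cooling
    if cost(j) < best_c:
        best_j, best_c = j, cost(j)

print(f"calibrated reservoir coefficient J = {best_j:.2f}")
```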

  2. H∞ output tracking control of uncertain and disturbed nonlinear systems based on neural network model

    NASA Astrophysics Data System (ADS)

    Li, Chengcheng; Li, Yuefeng; Wang, Guanglin

    2017-07-01

    The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.

  3. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement, and it plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement and good convergence rate. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.

  4. An improved predictive functional control method with application to PMSM systems

    NASA Astrophysics Data System (ADS)

    Li, Shihua; Liu, Huixian; Fu, Wenshu

    2017-01-01

    In the common design of prediction-model-based control methods, disturbances are usually considered neither in the prediction model nor in the control design. For control systems with large-amplitude or strong disturbances, it is difficult to predict the future outputs precisely with a conventional prediction model, and the desired optimal closed-loop performance is therefore degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances, so that the influence of disturbances on the system is taken into account in the optimisation procedure. Finally, considering the speed control problem for a permanent magnet synchronous motor (PMSM) servo system, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform confirm the effectiveness of the proposed algorithm.
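
    The core idea, estimating the lumped disturbance and embedding the estimate in the prediction used for control, fits in a toy first-order example. The scalar plant below is an assumed model, not the PMSM system.

```python
# Toy disturbance-observer-augmented prediction: x+ = a*x + b*u + d with
# unknown slowly varying d; the observer estimate d_hat enters the
# one-step prediction used to choose u.
a, b = 0.9, 0.5
d_true, ref = 0.8, 1.0

x, d_hat, u = 0.0, 0.0, 0.0
for k in range(60):
    x_next = a * x + b * u + d_true                      # true plant step
    d_hat += 0.5 * ((x_next - (a * x + b * u)) - d_hat)  # disturbance observer
    x = x_next
    # Composite prediction model: choose u so the predicted state hits ref.
    u = (ref - a * x - d_hat) / b

print(f"state = {x:.3f}, disturbance estimate = {d_hat:.3f} (true {d_true})")
```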

  5. The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management.

    PubMed

    Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo

    2017-10-27

    The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012-2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. Copyright © 2017 European Crohn’s and Colitis Organisation (ECCO). Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com

  6. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

    Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data. The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.
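
    A hedged sketch of the second approach: a Gaussian-process regression is trained on a modest number of pair-interaction energies and then predicts the potential at new separations at negligible cost. Here a Lennard-Jones curve stands in for the quantum-chemical data, and the squared-exponential kernel and its hyperparameters are illustrative choices.

```python
# GP regression on a few 'ab initio' pair energies (Lennard-Jones stand-in).
import numpy as np

def lj(r):                                   # stand-in for quantum-chemical data
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def kernel(a, b, length=0.4, var=1.0):
    """Squared-exponential covariance between separation grids a and b."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

r_train = np.linspace(0.95, 2.5, 12)         # modest number of training points
y_train = lj(r_train)

K = kernel(r_train, r_train) + 1e-8 * np.eye(r_train.size)  # jitter for stability
alpha = np.linalg.solve(K, y_train)

r_test = np.linspace(1.0, 2.4, 5)
y_pred = kernel(r_test, r_train) @ alpha     # GP posterior mean
print(np.column_stack([r_test, y_pred, lj(r_test)]).round(3))
```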

  7. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation

    PubMed Central

    2018-01-01

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site. PMID:29370230

  8. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    PubMed

    Illias, Hazlee Azil; Zhao Liang, Wee

    2018-01-01

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.
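
    The hybrid scheme can be sketched in a few dozen lines: a small particle swarm searches over (C, gamma) of an RBF-kernel SVM, scoring each particle by cross-validated accuracy. Synthetic data stands in for the DGA gas-ratio features, the swarm coefficients are generic choices rather than the paper's modified EPSO-TVAC variant, and scikit-learn is assumed to be available.

```python
# Generic PSO tuning of SVM hyperparameters against cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=5, n_informative=4,
                           n_redundant=1, n_classes=3, random_state=0)

def fitness(p):                      # p = [log10 C, log10 gamma]
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n_part = 8
pos = rng.uniform(-2, 2, size=(n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for it in range(15):
    r1, r2 = rng.random((n_part, 1)), rng.random((n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest,
      "CV accuracy:", pbest_f.max().round(3))
```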

  9. Acquisition of business intelligence from human experience in route planning

    NASA Astrophysics Data System (ADS)

    Bello Orgaz, Gema; Barrero, David F.; R-Moreno, María D.; Camacho, David

    2015-04-01

    The logistics sector raises a number of highly challenging problems. Probably one of the most important is shipment planning, i.e. planning the routes that the shippers have to follow to deliver the goods. In this article, we present an artificial intelligence-based solution designed to help a logistics company improve its route-planning process. To achieve this goal, the solution uses the knowledge acquired by the company's drivers to propose optimised routes. Hence, the proposed solution gathers the experience of the drivers, processes it and optimises the delivery process. The solution uses data mining to extract knowledge from the company's information systems and prepares it for analysis with a case-based reasoning (CBR) algorithm. The CBR obtains from the drivers' experience the critical business intelligence knowledge needed by the planner. The design of the routes is done by a genetic algorithm that, given the processed information, optimises the routes according to several objectives, such as minimising distance or time. Experimentation shows that the proposed approach is able to find routes that improve, on average, on the routes made by the human experts.
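
    The final optimisation stage can be sketched as a genetic algorithm over delivery orderings minimising total route length. The random point-to-point distances below stand in for the travel knowledge mined from driver experience.

```python
# GA over delivery orderings with order-preserving crossover and swap mutation.
import numpy as np

rng = np.random.default_rng(5)
n = 10
pts = rng.uniform(0, 100, size=(n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def route_len(perm):
    return sum(dist[perm[i], perm[(i + 1) % n]] for i in range(n))

def crossover(a, b):
    cut = rng.integers(1, n - 1)
    head = list(a[:cut])
    return np.array(head + [c for c in b if c not in head])

pop = [rng.permutation(n) for _ in range(40)]
for gen in range(200):
    pop.sort(key=route_len)
    parents = pop[:10]                               # keep the best routes
    children = []
    for _ in range(30):
        child = crossover(parents[rng.integers(10)], parents[rng.integers(10)])
        i, j = rng.integers(n, size=2)
        child[i], child[j] = child[j], child[i]      # swap mutation
        children.append(child)
    pop = parents + children

best = min(pop, key=route_len)
print("best route length:", round(route_len(best), 1))
```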

  10. Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.

    PubMed

    Garnavi, Rahil; Aldeen, Mohammad; Bailey, James

    2012-11-01

    This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers; namely, Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
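
    The Gain-Ratio score used for feature selection is information gain normalised by the split entropy of the feature. A self-contained sketch with invented binarised features follows.

```python
# Gain ratio = information gain / split entropy, for discrete features.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels):
    gain, split_info = entropy(labels), 0.0
    for v in np.unique(feature):
        mask = feature == v
        w = mask.mean()
        gain -= w * entropy(labels[mask])   # subtract conditional entropy
        split_info -= w * np.log2(w)
    return gain / split_info if split_info > 0 else 0.0

y = np.array([1, 1, 1, 0, 0, 0, 1, 0])           # malignant / benign labels
texture = np.array([1, 1, 1, 0, 0, 0, 0, 0])      # highly informative feature
border = np.array([1, 0, 1, 0, 1, 0, 1, 0])       # weakly informative feature
print("gain ratio, texture feature:", round(gain_ratio(texture, y), 3))
print("gain ratio, border feature: ", round(gain_ratio(border, y), 3))
```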

  11. The optimisation of the laser-induced forward transfer process for fabrication of polyfluorene-based organic light-emitting diode pixels

    NASA Astrophysics Data System (ADS)

    Shaw-Stewart, James; Mattle, Thomas; Lippert, Thomas; Nagel, Matthias; Nüesch, Frank; Wokaun, Alexander

    2013-08-01

    Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced to a medium vacuum, and the donor-receiver gap has been controlled with the use of spacers. Insight into the effect of the LIFT process upon OLED pixel performance is presented here, obtained through the optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of the pixel morphology quality on the cathode metal is observed, and the dependence on laser transfer fluence is also analysed. The pixel device performances are compared to conventionally fabricated devices, and cathode effects have been examined in detail. The silver cathode pixels show more heterogeneous pixel morphologies and correspondingly poorer efficiency characteristics. The aluminium cathode pixels have greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission shows a fluence dependence for silver cathode pixels.

  12. Optimisation of a propagation-based x-ray phase-contrast micro-CT system

    NASA Astrophysics Data System (ADS)

    Nesterets, Yakov I.; Gureyev, Timur E.; Dimmock, Matthew R.

    2018-03-01

    Micro-CT scanners find applications in many areas ranging from biomedical research to materials science. In order to provide spatial resolution on a micron scale, these scanners are usually equipped with micro-focus, low-power x-ray sources and hence require long scanning times to produce high-resolution 3D images of the object with acceptable contrast-to-noise. Propagation-based phase-contrast tomography (PB-PCT) has the potential to significantly improve the contrast-to-noise ratio (CNR) or, alternatively, reduce the image acquisition time while preserving the CNR and the spatial resolution. We propose a general approach for the optimisation of the PB-PCT imaging system. When applied to an imaging system with fixed source and detector parameters, this approach requires optimisation of only two independent geometrical parameters, the source-to-object distance R1 and the geometrical magnification M, in order to produce the best spatial resolution and CNR. If, in addition to R1 and M, the system parameter space also includes the source size and the anode potential, this approach allows one to find a unique configuration of the imaging system that produces the required spatial resolution and the best CNR.
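
    The geometric part of the optimisation can be illustrated with the standard unsharpness model, in which the object-plane resolution combines penumbral blur from the finite source with demagnified detector blur, res(M) = sqrt((s(M-1)/M)^2 + (d/M)^2), so an intermediate magnification is optimal. The numbers below are illustrative, not the paper's.

```python
# Scan the magnification M for the minimum of the combined geometric blur.
import numpy as np

s, d = 20.0, 50.0                 # source size and detector blur, microns (assumed)
M = np.linspace(1.01, 20, 2000)
res = np.sqrt((s * (M - 1) / M) ** 2 + (d / M) ** 2)

i = res.argmin()                  # analytic optimum is M = 1 + (d/s)^2
print(f"optimal magnification M = {M[i]:.2f}, resolution = {res[i]:.2f} um")
```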

  13. Modelling low velocity impact induced damage in composite laminates

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Soutis, Constantinos

    2017-12-01

    The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential and recent effort by the composites research community is reviewed in this work. The FE predicted damage initiation and propagation can be validated by Non Destructive Techniques (NDT) that gives confidence to the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve performance of components and structures used in aircraft construction.

  14. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K.

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  15. Parameterisation of Biome BGC to assess forest ecosystems in Africa

    NASA Astrophysics Data System (ADS)

    Gautam, Sishir; Pietsch, Stephan A.

    2010-05-01

    African forest ecosystems are an important environmental and economic resource. Several studies show that tropical forests are critical to society as economic, environmental and societal resources. Tropical forests are carbon dense and thus play a key role in climate change mitigation. Unfortunately, the response of tropical forests to environmental change is largely unknown owing to insufficient spatially extensive observations. In developing regions like Africa, where long-term records of forest management are unavailable, the process-based ecosystem simulation model BIOME-BGC could be a suitable tool for explaining forest ecosystem dynamics. This ecosystem simulation model uses descriptive input parameters to establish the physiology, biochemistry, structure and allocation patterns within vegetation functional types, or biomes. The lack of documented parameters for large-scale simulations is currently the major limitation to regional modelling of African forest ecosystems. This study was conducted to document input parameters for BIOME-BGC for the major natural tropical forests of the Congo basin. Based on available literature and field measurements, updated values were assigned for turnover and mortality, allometry, carbon-to-nitrogen ratios, allocation of plant material to labile, cellulose and lignin pools, tree morphology and other relevant factors. Daily climate input data for the model applications were generated using the statistical weather generator MarkSim. The forest was inventoried at various sites, and soil samples from the corresponding stands across Gabon were collected; their carbon and nitrogen contents were determined by soil analysis. The observed tree volume, soil carbon and soil nitrogen were then compared with the simulated model outputs to evaluate model performance. Furthermore, simulations using Congo Basin-specific parameters and the generalised BIOME-BGC parameters for tropical evergreen broadleaved tree species were executed and the results compared. Once the model was optimised for forests in the Congo basin, it was validated against observed tree volume, soil carbon and soil nitrogen from a set of independent plots.

  16. Auto-calibration of a one-dimensional hydrodynamic-ecological model using a Monte Carlo approach: simulation of hypoxic events in a polymictic lake

    NASA Astrophysics Data System (ADS)

    Luo, L.

    2011-12-01

    Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce the time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximising the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modelled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than the traditional manual calibration that has been standard practice for similarly complex water quality models.
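
    A stripped-down version of the auto-calibration loop: draw random parameter sets, run the model, and score each run with RMSE, Pearson r and the Nash-Sutcliffe coefficient, keeping the best. The sinusoidal 'model' and data below are trivial placeholders for DYRESM-CAEDYM.

```python
# Monte Carlo parameter sampling scored by RMSE, r and Nash-Sutcliffe (Nr).
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(100, dtype=float)
obs = 10 + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size)

def model(amp, phase):
    return 10 + amp * np.sin(2 * np.pi * t / 50 + phase)

def scores(sim):
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    r = np.corrcoef(sim, obs)[0, 1]
    nr = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, r, nr

best = None
for _ in range(10000):                     # Monte Carlo parameter sampling
    amp, phase = rng.uniform(0, 10), rng.uniform(-1, 1)
    rmse, r, nr = scores(model(amp, phase))
    if best is None or rmse < best[0]:
        best = (rmse, r, nr, amp, phase)

print(f"best: RMSE={best[0]:.2f}, r={best[1]:.2f}, Nr={best[2]:.2f}, "
      f"amp={best[3]:.2f}, phase={best[4]:.2f}")
```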

  17. Finite element analysis of the design and manufacture of thin-walled pressure vessels used as aerosol cans

    NASA Astrophysics Data System (ADS)

    Abdussalam, Ragba Mohamed

    Thin-walled cylinders are used extensively in the food packaging and cosmetics industries. The cost of material is a major contributor to the overall cost and so improvements in design and manufacturing processes are always being sought. Shape optimisation provides one method for such improvements. Aluminium aerosol cans are a particular form of thin-walled cylinder with a complex shape consisting of truncated cone top, parallel cylindrical section and inverted dome base. They are manufactured in one piece by a reverse-extrusion process, which produces a vessel with a variable thickness from 0.31 mm in the cylinder up to 1.31 mm in the base for a 53 mm diameter can. During manufacture, packaging and charging, they are subjected to pressure, axial and radial loads and design calculations are generally outside the British and American pressure vessel codes. 'Design-by-test' appears to be the favoured approach. However, a more rigorous approach is needed in order to optimise the designs. Finite element analysis (FEA) is a powerful tool for predicting stress, strain and displacement behaviour of components and structures. FEA is also used extensively to model manufacturing processes. In this study, elastic and elastic-plastic FEA has been used to develop a thorough understanding of the mechanisms of yielding, 'dome reversal' (an inherent safety feature, where the base suffers elastic-plastic buckling at a pressure below the burst pressure) and collapse due to internal pressure loading and how these are affected by geometry. It has also been used to study the buckling behaviour under compressive axial loading. Furthermore, numerical simulations of the extrusion process (in order to investigate the effects of tool geometry, friction coefficient and boundary conditions) have been undertaken. Experimental verification of the buckling and collapse behaviours has also been carried out and there is reasonable agreement between the experimental data and the numerical predictions.

  18. AlphaMate: a program for optimising selection, maintenance of diversity, and mate allocation in breeding programs.

    PubMed

    Gorjanc, Gregor; Hickey, John M

    2018-05-02

    AlphaMate is a flexible program that optimises selection, maintenance of genetic diversity, and mate allocation in breeding programs. It can be used in animal populations and in cross- and self-pollinating plant populations, which can be subject to selective breeding or conservation management. The problem is formulated as a multi-objective optimisation of a valid mating plan that is solved with an evolutionary algorithm. A valid mating plan is defined by a combination of mating constraints (the number of matings, the maximal number of parents, the minimal/equal/maximal number of contributions per parent, or allowance for selfing) that are gender specific or generic. The optimisation can maximise genetic gain, minimise group coancestry, minimise inbreeding of individual matings, or maximise genetic gain for a given increase in group coancestry or inbreeding. Users provide a list of candidate individuals with associated gender and selection criteria information (if applicable) and a coancestry matrix. Selection criteria and the coancestry matrix can be based on pedigree or genome-wide markers. Additional individual- or mating-specific information can be included to enrich the optimisation objectives. An example of rapid recurrent genomic selection in wheat demonstrates how AlphaMate can double the efficiency of converting genetic diversity into genetic gain compared to truncation selection. Another example demonstrates the use of genome editing to expand the gain-diversity frontier. Executable versions of AlphaMate for Windows, Mac, and Linux platforms are available at http://www.AlphaGenes.roslin.ed.ac.uk/AlphaMate. Contact: gregor.gorjanc@roslin.ed.ac.uk.
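
    The underlying trade-off that AlphaMate's evolutionary algorithm navigates can be caricatured as optimal-contribution selection: choose parent contributions c on the simplex to maximise gain c'g minus a penalty on group coancestry c'Ac. The (1+1)-style search, penalty weight and data below are simplistic placeholders for AlphaMate's actual algorithm and constraints.

```python
# Toy optimal-contribution selection by a (1+1) evolutionary search.
import numpy as np

rng = np.random.default_rng(7)
n, lam = 20, 5.0
g = rng.normal(size=n)                     # estimated breeding values
A = 0.5 * np.eye(n) + 0.05                 # toy coancestry matrix

def objective(c):
    return c @ g - lam * (c @ A @ c)       # gain minus coancestry penalty

c = np.full(n, 1.0 / n)                    # start from equal contributions
best = objective(c)
for _ in range(20000):
    cand = np.clip(c + 0.01 * rng.normal(size=n), 0, None)
    cand /= cand.sum()                     # contributions stay on the simplex
    if objective(cand) > best:
        c, best = cand, objective(cand)

top = np.argsort(c)[-5:][::-1]
print("largest contributions from parents:", top, c[top].round(3))
```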

  19. Dietary optimisation with omega-3 and omega-6 fatty acids for 12-23-month-old overweight and obese children in urban Jakarta.

    PubMed

    Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina

    2016-12-01

    Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of a local food-based diet that considers the ratio of these fatty acids to improve the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focused on the ratio of omega-6/omega-3 fatty acid intake for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body-mass-index-for-age Z-scores and selected through multistage random sampling. A 24-h recall was performed for 3 non-consecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plan fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range. The omega-6/omega-3 fatty acid ratio in the optimised diets was between 4 and 10. Moreover, the micronutrient intake level was within the range of the recommended daily allowance or estimated average recommendation and the tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements of overweight and obese children.
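
    The linear-programming formulation is worth making explicit: a ratio band 4 <= w6/w3 <= 10 is nonlinear as written, but multiplying through by w3 turns it into the two linear constraints w6 - 10 w3 <= 0 and 4 w3 - w6 <= 0. The sketch below (invented foods and nutrient values; SciPy assumed available) minimises cost subject to an energy floor and that band.

```python
# Diet LP with a linearised omega-6/omega-3 ratio band.
import numpy as np
from scipy.optimize import linprog

# columns: rice, fish, egg, soybean oil     (per 100 g serving; invented values)
cost = np.array([0.10, 0.80, 0.30, 0.20])   # price per serving
energy = np.array([130, 120, 150, 880])     # kcal
w6 = np.array([0.03, 0.20, 1.50, 50.0])     # omega-6, g
w3 = np.array([0.01, 1.50, 0.10, 7.0])      # omega-3, g

A_ub = np.vstack([
    -energy,            # total energy >= 900 kcal
    w6 - 10 * w3,       # total w6 / total w3 <= 10
    4 * w3 - w6,        # total w6 / total w3 >= 4
])
b_ub = np.array([-900.0, 0.0, 0.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5)] * 4)
print("servings per food:", res.x.round(2), " cost:", round(res.fun, 2))
```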

  20. The Optimisation of the Expression of Recombinant Surface Immunogenic Protein of Group B Streptococcus in Escherichia coli by Response Surface Methodology Improves Humoral Immunity.

    PubMed

    Díaz-Dinamarca, Diego A; Jerias, José I; Soto, Daniel A; Soto, Jorge A; Díaz, Natalia V; Leyton, Yessica Y; Villegas, Rodrigo A; Kalergis, Alexis M; Vásquez, Abel E

    2018-03-01

    Group B Streptococcus (GBS) is the leading cause of neonatal meningitis and a common pathogen in the livestock and aquaculture industries around the world. Conjugate polysaccharide and protein-based vaccines are under development. The surface immunogenic protein (SIP) is conserved across all GBS serotypes and has been shown to be a good target for vaccine development. The expression of recombinant proteins in Escherichia coli has been shown to be useful in vaccine development, and protein purification is a factor affecting immunogenicity. Response surface methodology (RSM) with a Box-Behnken design can optimise recombinant protein expression. However, the biological effect in mice immunised with an immunogenic protein optimised by RSM and purified by low-affinity chromatography was unknown. In this study, we used RSM to optimise the expression of the recombinant SIP (rSIP), and we evaluated the SIP-specific humoral response and the ability of the protein to decrease GBS colonisation of the vaginal tract in female mice. Ni-NTA chromatography showed that RSM increased the rSIP expression yield, enabling a better purification process. This improvement in rSIP purification suggests a stronger induction of the anti-SIP IgG immune response and a positive effect on decreasing GBS intravaginal colonisation. RSM applied to optimise the expression of recombinant proteins with immunogenic capacity is an interesting alternative for the preclinical evaluation of vaccines, and could improve their immune response.
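
    A Box-Behnken design for three factors consists of the 12 edge midpoints of the factor cube plus centre replicates, and the measured response is then fitted with a full quadratic surface. The sketch below generates the design and fits such a surface by least squares; the factor meanings and simulated response are assumptions for illustration (e.g. inducer concentration, temperature, induction time), not the paper's data:

      import numpy as np

      rng = np.random.default_rng(1)
      pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]

      # Box-Behnken design: vary two factors at (-1, +1) with the third at 0,
      # then add centre-point replicates.
      X = []
      for i, j in pairs:
          for a in (-1, 1):
              for b in (-1, 1):
                  row = [0, 0, 0]
                  row[i], row[j] = a, b
                  X.append(row)
      X += [[0, 0, 0]] * 3
      X = np.array(X, dtype=float)         # 15 runs for 3 factors

      # Simulated yields with a curved optimum (stand-in for measured rSIP yield).
      y = 5 + X @ np.array([1.0, 0.5, -0.3]) - (X ** 2).sum(axis=1) \
          + rng.normal(0, 0.1, len(X))

      # Full quadratic response surface: intercept, linear, interaction, squares.
      D = np.column_stack([np.ones(len(X))]
                          + [X[:, i] for i in range(3)]
                          + [X[:, i] * X[:, j] for i, j in pairs]
                          + [X[:, i] ** 2 for i in range(3)])
      beta, *_ = np.linalg.lstsq(D, y, rcond=None)
      print(beta.round(2))                 # coefficients locate the optimum region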

  1. The robust model predictive control based on mixed H2/H∞ approach with separated performance formulations and its ISpS analysis

    NASA Astrophysics Data System (ADS)

    Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong

    2017-12-01

    In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important in real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement imposed as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive property of linear systems. Two different Lyapunov functions are used to formulate the two performance indices separately for the two subsystems. The proposed RMPC then optimises both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and input-to-state practical stability (ISpS) of the closed loop is proven. Numerical examples illustrate the enlarged feasible region and the improved performance of the proposed design.
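
    In the weighting method referred to here, the two indices are collapsed into one objective while the disturbance-attenuation requirement stays as a constraint. In assumed notation (not taken from the paper), the per-step optimisation has the form:

      \min_{u}\; J = \lambda\, J_{H_2} + (1-\lambda)\, J_{H_\infty},
      \qquad 0 < \lambda < 1,
      \qquad \text{s.t.}\;\; \|z\|_2 \le \gamma\, \|w\|_2
      \;\;\text{and input/state constraints.}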

  2. Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.

    PubMed

    Kramar, A; Turk, S; Vrecer, F

    2003-04-30

    The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid-bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), the methacrylate polymer ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion, were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were the cumulative percentages of diclofenac dissolved at 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and independent variables. The optimisation procedure generated an optimum of 40% release in 3 h, with a plasticizer concentration of 25% w/w, a coating dispersion quantity of 400 g and a polymer ratio (Eudragit RS:Eudragit RL) of 3/1. The optimised formulation prepared according to the computer-determined levels provided a release profile close to the predicted values. We also studied the thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on drug release from the pellets.

  3. Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0

    DOE PAGES

    Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; ...

    2016-08-25

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
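
    The adjoint supplies the gradient of the model-data misfit that drives this kind of calibration. As a stand-in for the adjoint machinery, the sketch below recovers two parameters of a toy saturating flux curve by gradient descent on a least-squares cost, with the gradient taken by finite differences (model, data and values are all illustrative, not JULES):

      import numpy as np

      rng = np.random.default_rng(5)
      forcing = np.linspace(1.0, 10.0, 50)

      def model(p):                        # toy saturating GPP-style response
          return p[0] * forcing / (p[1] + forcing)

      obs = model(np.array([5.0, 2.0])) + rng.normal(0, 0.05, forcing.size)

      def cost(p):                         # least-squares model-data misfit
          return 0.5 * np.sum((model(p) - obs) ** 2)

      def grad(p, eps=1e-6):               # finite differences stand in for the adjoint
          g = np.zeros_like(p)
          for i in range(p.size):
              dp = np.zeros_like(p)
              dp[i] = eps
              g[i] = (cost(p + dp) - cost(p - dp)) / (2 * eps)
          return g

      p = np.array([1.0, 1.0])             # first-guess parameters
      for _ in range(2000):                # descend to a local optimum
          p -= 0.01 * grad(p)
      print(p.round(2))                    # approaches the synthetic truth (5.0, 2.0)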

  4. Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.

    PubMed

    Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F

    2004-01-01

    Control and optimisation of plant processes has become a priority for WWTP managers. The calibration and verification of a mathematical model provides an important tool for the investigation of advanced control strategies that may assist in the design or optimisation of WWTPs. This paper describes the calibration of the ASM2d model for two full-scale biological nitrogen and phosphorus removal plants in order to characterise the biological process and to upgrade the plants' performance. Simulation results showed good correspondence with experimental data, demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study for each WWTP was carried out with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.

  5. Modelling and Analysis of a New Piezoelectric Dynamic Balance Regulator

    PubMed Central

    Du, Zhe; Mei, Xue-Song; Xu, Mu-Xun

    2012-01-01

    In this paper, a new piezoelectric dynamic balance regulator, which can be used in motorised spindle systems, is presented. The dynamic balancing adjustment mechanism is driven by an in-plane bending vibration from an annular piezoelectric stator excited by a high-frequency sinusoidal input voltage. This device differs in construction, characteristics and operating principles from a conventional balance regulator. In this work, a dynamic model of the regulator is first developed using a detailed analytical method. Thereafter, MATLAB is employed to numerically simulate the relations between the dominant parameters and the characteristics of the regulator based on the dynamic model. Finally, experimental measurements are used to verify the validity of the dynamic model. Consequently, the mathematical model presented and analysed in this paper can be used as a tool for optimising the design of a piezoelectric dynamic balance regulator during steady-state operation. PMID:23202182

  6. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    PubMed

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
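
    The Morris method ranks factors by their elementary effects: one-at-a-time perturbations along random trajectories, summarised by the mean of absolute effects (mu*, overall influence) and their standard deviation (sigma, nonlinearity or interaction). A minimal sketch on a stand-in model, not the AnMBR filtration model itself:

      import numpy as np

      rng = np.random.default_rng(2)

      def model(x):                        # stand-in for the filtration model
          return x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2] * x[3]

      k, r, delta = 4, 20, 1.0 / 3.0       # factors, trajectories, step in [0, 1]
      ee = np.zeros((r, k))
      for t in range(r):
          x = rng.uniform(0, 1 - delta, k)         # trajectory start point
          for i in rng.permutation(k):             # one-at-a-time moves
              x2 = x.copy()
              x2[i] += delta
              ee[t, i] = (model(x2) - model(x)) / delta
              x = x2
      mu_star = np.abs(ee).mean(axis=0)            # high mu* => influential factor
      sigma = ee.std(axis=0)                       # high sigma => nonlinear/interacting
      print(mu_star.round(2), sigma.round(2))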

  7. A method for measuring particle number emissions from vehicles driving on the road.

    PubMed

    Shi, J P; Harrison, R M; Evans, D E; Alam, A; Barnes, C; Carter, G

    2002-01-01

    Earlier research has demonstrated that the conditions of dilution of engine exhaust gases profoundly influence the size distribution and total number of particles emitted. Since real-world dilution conditions are variable and therefore difficult to simulate, this research sought to develop and validate a method for measuring particle number emissions from vehicles driving past on a road. This was achieved successfully using carbon dioxide as a tracer of exhaust gas dilution. By subsequently adjusting the data to a constant dilution factor, it is possible to compare emissions from different vehicles using different technologies and fuels on the basis of real-world emission data. Whilst further optimisation of the technique, especially in terms of matching the instrument response times, is desirable, the measurements offer useful insights into emissions from gasoline and diesel vehicles, and the substantial proportion of particles emitted in the 3-7 nanometre size range.
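
    The tracer correction itself is simple arithmetic: the CO2 excess over background gives the dilution ratio of the sampled plume, and particle counts are then rescaled to a common reference dilution. A sketch with illustrative numbers (not the paper's measurements):

      # CO2-tracer dilution correction; all values are illustrative assumptions.
      co2_exhaust = 13.0      # % CO2 in raw exhaust (typical order of magnitude)
      co2_ambient = 0.04      # % CO2 in background air
      co2_plume   = 0.10      # % CO2 measured in the diluted plume
      n_plume     = 2.5e5     # particles/cm3 measured in the plume

      # Dilution ratio inferred from the tracer excess over background.
      dr = (co2_exhaust - co2_ambient) / (co2_plume - co2_ambient)

      # Scale the particle count back to undiluted exhaust, then re-express it
      # at a fixed reference dilution so different vehicles can be compared.
      dr_ref = 1000.0
      n_at_ref = n_plume * dr / dr_ref
      print(f"dilution ratio ~{dr:.0f}, N at DR={dr_ref:.0f}: {n_at_ref:.2e} /cm3")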

  8. Design of a wideband CMOS impedance spectroscopy ASIC analog front-end for multichannel biosensor interfaces.

    PubMed

    Valente, Virgilio; Dai Jiang; Demosthenous, Andreas

    2015-08-01

    This paper presents the preliminary design and simulation of a flexible and programmable analog front-end (AFE) circuit with current and voltage readout capabilities for electrical impedance spectroscopy (EIS). The AFE is part of a fully integrated multifrequency EIS platform. The current readout comprises a transimpedance stage and an automatic gain control (AGC) unit designed to accommodate impedance changes of more than 3 orders of magnitude. The AGC is based on a dynamic peak detector that tracks changes in the input current over time and regulates the gain of a programmable gain amplifier in order to optimise the signal-to-noise ratio. The system works up to 1 MHz. The voltage readout consists of two stages of fully differential current-feedback instrumentation amplifiers, which provide 100 dB of CMRR and a programmable gain of up to 20 V/V per stage with a bandwidth in excess of 10 MHz.

  9. Multi-tissue partial volume quantification in multi-contrast MRI using an optimised spectral unmixing approach.

    PubMed

    Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme

    2018-06-01

    Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The contribution of this paper is twofold. It first proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which, in the context of multi-contrast MRI acquisition, allows the imaging sequence parameters to be set appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are assessed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis, namely the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
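
    In unmixing terms, each voxel's multi-contrast signal is modelled as a tissue-signature matrix times a proportion vector, estimated by penalised least squares. The sketch below solves a single voxel with a simple ridge penalty standing in for the paper's spatial regularity constraint; the signature matrix and signal are invented for illustration:

      import numpy as np

      # Columns = tissue signatures across three contrasts (illustrative values).
      S = np.array([[1.0, 0.2],
                    [0.3, 0.9],
                    [0.6, 0.5]])
      y = np.array([0.55, 0.62, 0.56])     # observed multi-contrast voxel signal
      lam = 0.1                            # penalty weight (ridge stand-in for the
                                           # spatial regularity term)

      # Penalised least squares: p = argmin ||S p - y||^2 + lam ||p||^2
      p = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)
      p = np.clip(p, 0.0, None)
      p /= p.sum()                         # project onto valid proportions
      print(p.round(3))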

  10. Towards a more detailed representation of high-latitude vegetation in the global land surface model ORCHIDEE (ORC-HL-VEGv1.0)

    NASA Astrophysics Data System (ADS)

    Druel, Arsène; Peylin, Philippe; Krinner, Gerhard; Ciais, Philippe; Viovy, Nicolas; Peregon, Anna; Bastrikov, Vladislav; Kosykh, Natalya; Mironycheva-Tokareva, Nina

    2017-12-01

    The simulation of vegetation-climate feedbacks at high latitudes in the ORCHIDEE land surface model was improved by the addition of three new circumpolar plant functional types (PFTs), namely non-vascular plants representing bryophytes and lichens, Arctic shrubs and Arctic C3 grasses. Non-vascular plants are assigned no stomatal conductance and very shallow roots, and can desiccate during dry episodes and become active again during wet periods, which gives them a larger phenological plasticity (i.e. adaptability and resilience to severe climatic constraints) compared to grasses and shrubs. Shrubs have a specific carbon allocation scheme, and differ from trees by their larger winter survival rates, due to protection by snow. Arctic C3 grasses use the same equations as the original ORCHIDEE version, but different parameter values. In situ observations of living biomass and net primary productivity (NPP) from Siberia were used to calibrate the parameters of the new PFTs using a Bayesian optimisation procedure. With the new PFTs, we obtain a 31 % lower NPP (north of 55° N), as well as a lower roughness length (-41 %) and transpiration (-33 %), and a higher winter albedo (+3.6 %) due to increased snow cover. A simulation of the water balance, runoff and drainage in the high northern latitudes using the new PFTs results in an 11 % increase of freshwater discharge into the Arctic Ocean (+140 km3 yr-1), owing to less evapotranspiration. Future developments should focus on the competition between these three PFTs and boreal tree PFTs, in order to simulate their area changes in response to climate change, and on the effect of carbon-nitrogen interactions.

  11. The Impact of Subsidies on the Ecological Sustainability and Future Profits from North Sea Fisheries

    PubMed Central

    Heymans, Johanna Jacomina; Mackinson, Steven; Sumaila, Ussif Rashid; Dyck, Andrew; Little, Alyson

    2011-01-01

    Background This study examines the impact of subsidies on the profitability and ecological stability of the North Sea fisheries over the past 20 years. It shows the negative impact that subsidies can have on both the biomass of important fish species and the possible profit from fisheries. The study includes subsidies in an ecosystem model of the North Sea and examines the possible effects of eliminating fishery subsidies. Methodology/Principal Findings Hindcast analysis between 1991 and 2003 indicates that subsidies reduced the profitability of the fishery even though gross revenue might have been high for specific fisheries sectors. Simulations seeking to maximise the total revenue between 2004 and 2010 suggest that this can be achieved by increasing the effort of Nephrops trawlers, beam trawlers, and the pelagic trawl-and-seine fleet, while reducing the effort of demersal trawlers. Simulations show that ecological stability can be realised by reducing the effort of the beam trawlers, Nephrops trawlers, pelagic- and demersal trawl-and-seine fleets. This analysis also shows that when subsidies are included, effort will always be higher for all fleets, because it effectively reduces the cost of fishing. Conclusions/Significance The study found that while removing subsidies might reduce the total catch and revenue, it increases the overall profitability of the fishery and the total biomass of commercially important species. For example, cod, haddock, herring and plaice biomass increased over the simulation when optimising for profit, and when optimising for ecological stability, the biomass for cod, plaice and sole also increased. When subsidies are eliminated, the study shows that rather than forcing those involved in the fishery into the red, fisheries become more profitable, despite a decrease in total revenue due to a loss of subsidies from the government. PMID:21637848

  12. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. The Aqua-Planet Experiment (APE): CONTROL SST Simulation

    NASA Technical Reports Server (NTRS)

    Blackburn, Michael; Williamson, David L.; Nakajima, Kensuke; Ohfuchi, Wataru; Takahashi, Yoshiyuki O.; Hayashi, Yoshi-Yuki; Nakamura, Hisashi; Ishiwatari, Masaki; Mcgregor, John L.; Borth, Hartmut; hide

    2013-01-01

    Climate simulations by 16 atmospheric general circulation models (AGCMs) are compared on an aqua-planet, a water-covered Earth with prescribed sea surface temperature varying only in latitude. The idealised configuration is designed to expose differences in the circulation simulated by different models. Basic features of the aqua-planet climate are characterised by comparison with Earth. The models display a wide range of behaviour. The balanced component of the tropospheric mean flow, and mid-latitude eddy covariances subject to budget constraints, vary relatively little among the models. In contrast, differences in damping in the dynamical core strongly influence transient eddy amplitudes. Historical uncertainty in modelled lower stratospheric temperatures persists in APE. Aspects of the circulation generated more directly by interactions between the resolved fluid dynamics and parameterized moist processes vary greatly. The tropical Hadley circulation forms either a single or double inter-tropical convergence zone (ITCZ) at the equator, with large variations in mean precipitation. The equatorial wave spectrum shows a wide range of precipitation intensity and propagation characteristics. Kelvin mode-like eastward propagation with remarkably constant phase speed dominates in most models. Westward propagation, less dispersive than the equatorial Rossby modes, dominates in a few models or occurs within an eastward propagating envelope in others. The mean structure of the ITCZ is related to precipitation variability, consistent with previous studies. The aqua-planet global energy balance is unknown but the models produce a surprisingly large range of top of atmosphere global net flux, dominated by differences in shortwave reflection by clouds. A number of newly developed models, not optimised for Earth climate, contribute to this. Possible reasons for differences in the optimised models are discussed. The aqua-planet configuration is intended as one component of an experimental hierarchy used to evaluate AGCMs. This comparison does suggest that the range of model behaviour could be better understood and reduced in conjunction with Earth climate simulations. Controlled experimentation is required to explore individual model behaviour and investigate convergence of the aqua-planet climate with increasing resolution.

  14. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad-scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institutions and from another clinic that did not provide patients for the training phase. The automated plans were compared against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application in clinical practice.

  15. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  16. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
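
    For a discrete set of scenario outcomes, the tail mean is simply the average of the worst beta-fraction of outcomes, and the combined criterion weights it against the plain mean. A minimal sketch (weights and outcomes are illustrative):

      import numpy as np

      def tail_mean(outcomes, beta):
          """Mean of the worst beta-fraction of outcomes (maximisation setting)."""
          k = max(1, int(np.ceil(beta * outcomes.size)))
          return np.sort(outcomes)[:k].mean()     # lowest values = worst scenarios

      outcomes = np.array([5.0, 7.0, 2.0, 9.0, 4.0])   # one solution, five scenarios
      beta, lam = 0.4, 0.3
      combined = lam * outcomes.mean() + (1 - lam) * tail_mean(outcomes, beta)
      print(round(combined, 3))            # criterion to maximise over solutions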

  17. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are computationally expensive for simulating train crashes. The high computational cost limits their direct application in investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. By contrast, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the accuracy of multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves on the accuracy of traditional multi-body models in train crash simulation while running at the same level of efficiency.
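
    The surrogate idea is to train a regressor offline on FE outputs and query it from the multi-body solver at run time. A minimal sketch with synthetic training data standing in for the FE database (names and values are illustrative):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(3)

      # Synthetic stand-in for offline FE results: contact force as a function
      # of crush displacement (m) and impact velocity (m/s).
      disp = rng.uniform(0.0, 0.5, 2000)
      vel = rng.uniform(5.0, 25.0, 2000)
      force = 8e5 * disp * (1 + 0.02 * vel) + rng.normal(0, 1e4, 2000)

      rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
      rf.fit(np.column_stack([disp, vel]), force)

      # At run time the multi-body contact model queries the surrogate instead
      # of a fixed force-displacement curve:
      print(rf.predict([[0.2, 15.0]]))     # predicted contact force (N)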

  18. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model

    PubMed Central

    2016-01-01

    The omnipresent need for optimisation requires constant improvement of companies’ business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually achieved by simulating the newly developed BP under various initial conditions and “what-if” scenarios. An effective business process simulation software (BPSS) tool is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include the quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for the ranking of BPSS tools based on their technical characteristics, employing DEX and qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible by adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results. PMID:26871694

  19. Optimised mounting conditions for poly (ether sulfone) in radiation detection.

    PubMed

    Nakamura, Hidehito; Shirakawa, Yoshiyuki; Sato, Nobuhiro; Yamada, Tatsuya; Kitamura, Hisashi; Takahashi, Sentaro

    2014-09-01

    Poly (ether sulfone) (PES) is a candidate for use as a scintillation material in radiation detection. Its characteristics, such as its emission spectrum and its effective refractive index (based on the emission spectrum), directly affect the propagation of light generated to external photodetectors. It is also important to examine the presence of background radiation sources in manufactured PES. Here, we optimise the optical coupling and surface treatment of the PES, and characterise its background. Optical grease was used to enhance the optical coupling between the PES and the photodetector; absorption by the grease of short-wavelength light emitted from PES was negligible. Diffuse reflection induced by surface roughening increased the light yield for PES, despite the high effective refractive index. Background radiation derived from the PES sample and its impurities was negligible above the ambient, natural level. Overall, these results serve to optimise the mounting conditions for PES in radiation detection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details the development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line at the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. The extended Kalman filter is used for state estimation. It is demonstrated that an MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit, whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
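
    The key step is re-linearising the plant at each sample and reducing the control move to a quadratic programme. The sketch below shows a bare single-step version on a toy plant: numerical Jacobians at the operating point, then the unconstrained QP solved in closed form. A real receding-horizon controller would stack predictions over a horizon and enforce the constraints; everything here is an illustrative stand-in, not the paper's boiler-turbine model:

      import numpy as np

      def f(x, u):                         # toy nonlinear plant, x in R^2, u in R^1
          return np.array([x[0] + 0.1 * x[1],
                           x[1] + 0.1 * (u[0] - np.sin(x[0]))])

      def jacobians(x, u, eps=1e-6):       # numerical linearisation at (x, u)
          A = np.zeros((2, 2))
          B = np.zeros((2, 1))
          for i in range(2):
              dx = np.zeros(2); dx[i] = eps
              A[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
          du = np.array([eps])
          B[:, 0] = (f(x, u + du) - f(x, u - du)) / (2 * eps)
          return A, B

      x0, u0 = np.array([0.5, 0.0]), np.array([0.0])
      xref = np.zeros(2)                   # set-point
      Q, r = np.eye(2), 0.01               # state and input weights

      # One MPC step: predict x+ ~ f(x0, u0) + B du, minimise the quadratic cost.
      A, B = jacobians(x0, u0)
      g = f(x0, u0) - xref
      H = B.T @ Q @ B + r                  # QP Hessian
      du = -np.linalg.solve(H, B.T @ Q @ g + r * u0)
      u = u0 + du                          # apply, then re-linearise next sample
      print(u)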

  1. Improving linear transport infrastructure efficiency by automated learning and optimised predictive maintenance techniques (INFRALERT)

    NASA Astrophysics Data System (ADS)

    Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele

    2017-09-01

    The on-going H2020 project INFRALERT aims to increase rail and road infrastructure capacity within the current framework of increased transportation demand by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach including several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. The results of these toolkits for a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.

  2. Development and optimisation of atorvastatin calcium loaded self-nanoemulsifying drug delivery system (SNEDDS) for enhancing oral bioavailability: in vitro and in vivo evaluation.

    PubMed

    Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M

    2017-05-01

    The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and, ultimately, oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of the ATC-SNEDDS was optimised using a Box-Behnken optimisation design. The optimised ATC-SNEDDS was characterised for various physicochemical properties, and pharmacokinetic, pharmacodynamic and histological studies were performed in rats. The optimised ATC-SNEDDS had a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of the optimised ATC-SNEDDS in rats was 2.34-fold higher than that of an ATC suspension. Pharmacodynamic studies revealed a significant reduction in the serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.

  3. BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities.

    PubMed

    Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark

    2016-11-01

    Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in healthy elderly subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, for local spatial intensity averaging, and for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible, so that users can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test to evaluate the robustness of BIANCA, and compared BIANCA's performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
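
    At its core, a supervised k-NN segmenter of this kind classifies each voxel from intensity and spatial features learned from manually labelled training voxels, then thresholds the resulting lesion probability map. A minimal sketch with synthetic features (not BIANCA's actual feature set or options):

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(4)

      # Synthetic per-voxel training features (e.g. FLAIR intensity, T1
      # intensity, a spatial coordinate) with manual lesion labels.
      n = 1000
      X = rng.normal(size=(n, 3))
      y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n)) > 1.0   # True = WMH

      knn = KNeighborsClassifier(n_neighbors=40)
      knn.fit(X, y)

      # Output is a lesion probability per voxel, thresholded into a binary mask.
      prob = knn.predict_proba(rng.normal(size=(5, 3)))[:, 1]
      print(prob.round(2), prob > 0.9)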

  4. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    PubMed

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require the optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by using pre-processing to remove unwanted variance from the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested on the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) was obtained using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) over PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, the design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through the application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability and its potential as a systematic optimal bioprocess design tool were demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.

  6. Generating a Dynamic Synthetic Population – Using an Age-Structured Two-Sex Model for Household Dynamics

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Mokhtarian, Payam; Perez, Pascal

    2014-01-01

    Generating a reliable computer-simulated synthetic population is necessary for knowledge processing and decision-making analysis in agent-based systems in order to measure, interpret and describe each target area and the human activity patterns within it. In this paper, both synthetic reconstruction (SR) and combinatorial optimisation (CO) techniques are discussed for generating a reliable synthetic population for a certain geographic region (in Australia) using aggregated- and disaggregated-level information available for such an area. A CO algorithm using the quadratic function of population estimators is presented in this paper in order to generate a synthetic population while considering a two-fold nested structure for the individuals and households within the target areas. The baseline population in this study is generated from the confidentialised unit record files (CURFs) and 2006 Australian census tables. The dynamics of the created population is then projected over five years using a dynamic micro-simulation model for individual- and household-level demographic transitions. This projection is then compared with the 2011 Australian census. A prediction interval is provided for the population estimates obtained by the bootstrapping method, by which the variability structure of a predictor can be replicated in a bootstrap distribution. PMID:24733522

  7. Analysis on flexible manufacturing system layout using arena simulation software

    NASA Astrophysics Data System (ADS)

    Fadzly, M. K.; Saad, Mohd Sazli; Shayfull, Z.

    2017-09-01

    A flexible manufacturing system (FMS) is defined as a highly automated group technology machine cell, consisting of a group of processing stations interconnected by an automated material handling and storage system and controlled by an integrated computer system. An FMS can produce parts or products in the mid-volume, mid-variety production range. The layout is an important criterion in designing an FMS to produce a part or product. Facility layout of an FMS involves positioning cells within given boundaries so as to minimise the total projected travel time between cells. Defining the layout includes specifying the spatial coordinates of each cell, its orientation in either a horizontal or vertical position, and the location of its load or unload points. There are many types of FMS layout, such as in-line, loop, ladder and robot-centred cell layouts. This research concentrates on the design and optimisation of the FMS layout. It concludes that the FMS in-line layout is the best layout in terms of time and cost, based on simulations with the ARENA simulation software.

  8. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine ultimate pit limits in open-pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The approaches proposed for this purpose aim at maximising economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for the optimisation of stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the challenges of underground mine planning. Finally, the results are evaluated and compared.

  9. Numerical Modelling of Effects of Biphasic Layers of Corrosion Products to the Degradation of Magnesium Metal In Vitro

    PubMed Central

    Ahmed, Safia K.; Ward, John P.; Liu, Yang

    2017-01-01

    Magnesium (Mg) is becoming increasingly popular for orthopaedic implant materials. Its mechanical properties are closer to bone than those of other implant materials, allowing for more natural healing under the stresses experienced during recovery. Being biodegradable, it also eliminates the requirement for further surgery to remove the hardware. However, Mg corrodes rapidly in clinically relevant aqueous environments, compromising its use. This problem can be addressed by alloying the Mg, but challenges remain in optimising the properties of the material for clinical use. In this paper, we present a mathematical model to provide a systematic means of quantitatively predicting Mg corrosion in aqueous environments, as a means of informing standardisation of in vitro investigation of Mg alloy corrosion to determine implant design parameters. The model describes corrosion through reactions with water, to produce magnesium hydroxide Mg(OH)2, and subsequently with carbon dioxide to form magnesium carbonate MgCO3. The corrosion products form distinct protective layers around the magnesium block that are modelled as porous media. The resulting model of advection–diffusion equations with multiple moving boundaries was solved numerically, using asymptotic expansions to deal with singular cases. The model has few free parameters, and it is shown that these can be tuned to predict a full range of corrosion rates, reflecting differences between pure magnesium and magnesium alloys. Data from practicable in vitro experiments can be used to calibrate the model’s free parameters, from which model simulations using in vivo relevant geometries provide a cheap first step in optimising Mg-based implant materials. PMID:29267244

  10. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, James; Ford, Ian J.

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable “gauge” transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.

  11. Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models

    PubMed Central

    Prinold, Joe A.I.; Bull, Anthony M.J.

    2014-01-01

    Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method, to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621

  12. Assessing the potential impacts of a revised set of on-farm nutrient and sediment 'basic' control measures for reducing agricultural diffuse pollution across England.

    PubMed

    Collins, A L; Newell Price, J P; Zhang, Y; Gooday, R; Naden, P S; Skirvin, D

    2018-04-15

    The need for improved abatement of agricultural diffuse water pollution represents cause for concern throughout the world. A critical aspect in the design of on-farm intervention programmes concerns the potential technical cost-effectiveness of packages of control measures. The European Union (EU) Water Framework Directive (WFD) calls for Programmes of Measures (PoMs) to protect freshwater environments and these comprise 'basic' (mandatory) and 'supplementary' (incentivised) options. Recent work has used measure review, elicitation of stakeholder attitudes and a process-based modelling framework to identify a new alternative set of 'basic' agricultural sector control measures for nutrient and sediment abatement across England. Following an initial scientific review of 708 measures, 90 were identified for further consideration at an industry workshop and 63 had industry support. Optimisation modelling was undertaken to identify a shortlist of measures using the Demonstration Test Catchments as sentinel agricultural landscapes. Optimisation selected 12 measures relevant to livestock or arable systems. Model simulations of 95% implementation of these 12 candidate 'basic' measures, in addition to business-as-usual, suggested reductions in the national agricultural nitrate load of 2.5%, whilst corresponding reductions in phosphorus and sediment were 11.9% and 5.6%, respectively. The total cost of applying the candidate 'basic' measures across the whole of England was estimated to be £450 million per annum, which is equivalent to £52 per hectare of agricultural land. This work contributed to a public consultation in 2016. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  13. Optimisation of simulated team training through the application of learning theories: a debate for a conceptual framework

    PubMed Central

    2014-01-01

    Background As a conceptual review, this paper will debate relevant learning theories to inform the development, design and delivery of an effective educational programme for simulated team training relevant to health professionals. Discussion Kolb’s experiential learning theory is used as the main conceptual framework to define the sequence of activities. Dewey’s theory of reflective thought and action, Jarvis’ modification of Kolb’s learning cycle and Schön’s reflection-on-action serve as a model to design scenarios for optimal concrete experience and debriefing for challenging participants’ beliefs and habits. Bandura’s theory of self-efficacy and newer socio-cultural learning models outline that for efficient team training, it is mandatory to introduce the social-cultural context of a team. Summary The ideal simulated team training programme needs a scenario for concrete experience, followed by a debriefing with a critical reflexive observation and abstract conceptualisation phase, and ending with a second scenario for active experimentation. Let them re-experiment to optimise the effect of a simulated training session. Challenge them to the edge: the scenario needs to challenge participants to generate failures and feelings of inadequacy to drive and motivate team members to critically reflect and learn. Not experience itself but the inadequacy and contradictions of habitual experience serve as the basis for reflection. Facilitate critical reflection: facilitators and group members must guide and motivate individual participants through the debriefing session, inciting and empowering learners to challenge their own beliefs and habits. To do this, learners need to feel psychologically safe. Let the group talk and critically explore. Motivate with reality and context: training with multidisciplinary team members, with different levels of expertise, acting in their usual environment (in-situ simulation) on physiological variables is mandatory to introduce cultural context and social conditions to the learning experience. Embedding in situ team training sessions into a teaching programme to enable repeated training and to assess team performance regularly is mandatory for a cultural change of sustained improvement of team performance and patient safety. PMID:24694243

  14. Development of a Compton suppressed gamma spectrometer using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Britton, Richard

    Gamma ray spectroscopy is routinely used to measure radiation in a number of situations. These include security applications, nuclear forensics studies, characterisation of radioactive sources, and environmental monitoring. For routine studies of environmental materials, the amount of radioactivity present is often very low, requiring spectroscopy systems which have to monitor the source for up to 7 days to achieve the required sensitivity. Recent developments in detector technology and data processing techniques have opened up the possibility of developing a highly efficient Compton Suppressed system, that was previously the preserve of large experimental collaborations. The accessibility of Monte-Carlo toolkits such as GEANT4 also provide the opportunity to optimise these systems using computer simulations, greatly reducing the need for expensive (and inefficient) testing in the laboratory. This thesis details the development of such a Compton Suppressed, planar HPGe detector system. Using the GEANT4 toolkit in combination with the experimental facilities at AWE, Aldermaston (which include HPGe detection systems, scintillator based detector systems, advanced shielding materials and gamma-gamma coincidence systems), simulations were built and validated to reproduce the detector response seen in the 'real-life' systems. This resulted in several improvements to the current system; for the shielding materials used, terrestrial and cosmic radiation were minimised, while reducing the X-ray fluorescence seen in the primary HPGe detector by an order of magnitude. With respect to the HPGe detector itself, an optimum thickness was identified for low energy (<300 keV) radiation, which maximised the efficiency for the energy range of interest while minimising the interaction probability for higher energy radionuclides (which are the primary cause of the Compton continuum that obscures lower energy decays). A combination of secondary detectors were then optimised to design a Compton Suppression system for the primary detector, which could improve the performance of the current Compton Suppression system by an order of magnitude. This equates to a reduction of the continuum by up to a factor of 240 for a nuclide such as Co-60, which is crucial for the detection of low-energy, low-activity emitters typically swamped by such a continuum. Finally, thoroughly optimised acquisition and analysis software has also been written to process data created by future high sensitivity gamma coincidence systems. This includes modules for the creation of histograms, coincidence matrices, and an ASCII to binary converter (for historical data) that has resulted in an analysis speed increase of up to 20000 times when compared to the software originally used for the extraction of coincidence information. Modules for low-energy time-walk correction and the removal of accidental coincidences are also included, which represent a capability that was not previously available.

  15. Evaluation of the BreastSimulator software platform for breast tomography

    NASA Astrophysics Data System (ADS)

    Mettivier, G.; Bliznakova, K.; Sechopoulos, I.; Boone, J. M.; Di Lillo, F.; Sarno, A.; Castriconi, R.; Russo, P.

    2017-08-01

    The aim of this work was the evaluation of BreastSimulator, a breast x-ray imaging simulation software package, as a tool for the creation of 3D uncompressed breast digital models and for the simulation and optimisation of computed tomography (CT) scanners dedicated to the breast. Eight 3D digital breast phantoms were created with glandular fractions in the range 10%-35%. The models are characterised by different sizes and realistically modelled anatomical features. X-ray CT projections were simulated for a dedicated cone-beam CT scanner and reconstructed with the FDK algorithm. X-ray projection images were simulated for 5 mono-energetic (27, 32, 35, 43 and 51 keV) and 3 poly-energetic x-ray spectra typically employed in current CT scanners dedicated to the breast (49, 60, or 80 kVp). Clinical CT images acquired from two different clinical breast CT scanners were used for comparison purposes. The quantitative evaluation included calculation of the power-law exponent, β, from simulated and real breast tomograms, based on the power spectrum fitted with a function of the spatial frequency, f, of the form S(f) = α/f^β. The breast models were validated by comparison against clinical breast CT and published data. We found that the calculated β coefficients were close to those of clinical CT data from a dedicated breast CT scanner and to reported data in the literature. In evaluating the software package BreastSimulator to generate breast models suitable for use with breast CT imaging, we found that the breast phantoms produced with the software tool can reproduce the anatomical structure of real breasts, as evaluated by calculating the β exponent from the power spectral analysis of simulated images. As such, this research tool might contribute considerably to the further development, testing and optimisation of breast CT imaging techniques.
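
    The β estimation step is straightforward to reproduce in outline: compute the 2-D power spectrum, radially average it, and fit log S against log f. A minimal sketch, using a synthetic white-noise image in place of a breast tomogram:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.standard_normal((256, 256))         # stand-in for a CT slice

    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2

    # Radial frequency of every pixel
    ny, nx = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

    # Radially average in annular bins, then fit log S = log(alpha) - beta*log(f)
    bins = np.linspace(0.01, 0.5, 40)
    centres, means = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (r >= lo) & (r < hi)
        if mask.any():
            centres.append(0.5 * (lo + hi))
            means.append(power[mask].mean())
    slope, intercept = np.polyfit(np.log(centres), np.log(means), 1)
    print("beta =", -slope)    # white noise should give beta close to 0
    ```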

  16. The multiple roles of computational chemistry in fragment-based drug design

    NASA Astrophysics Data System (ADS)

    Law, Richard; Barker, Oliver; Barker, John J.; Hesterkamp, Thomas; Godemann, Robert; Andersen, Ole; Fryatt, Tara; Courtney, Steve; Hallett, Dave; Whittaker, Mark

    2009-08-01

    Fragment-based drug discovery (FBDD) represents a change in strategy from the screening of molecules with higher molecular weights and physical properties more akin to fully drug-like compounds, to the screening of smaller, less complex molecules. This is because it has been recognised that fragment hit molecules can be efficiently grown and optimised into leads, particularly once the binding mode to the target protein has been determined by 3D structural elucidation, e.g. by NMR or X-ray crystallography. Several studies have shown that medicinal chemistry optimisation of an already drug-like hit or lead compound can result in a final compound with excessively high molecular weight and lipophilicity. The evolution of a lower molecular weight fragment hit therefore represents an attractive alternative approach to optimisation, as it allows better control of compound properties. Computational chemistry can play an important role both prior to a fragment screen, in producing a target-focussed fragment library, and post-screening, in the evolution of a drug-like molecule from a fragment hit, both with and without an available fragment-target co-complex structure. We will review many of the current developments in the area and illustrate them with some recent examples from successful FBDD discovery projects that we have conducted.

  17. A bi-population based scheme for an explicit exploration/exploitation trade-off in dynamic environments

    NASA Astrophysics Data System (ADS)

    Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique

    2017-05-01

    Optimisation in changing environments is a challenging research topic, since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches for addressing dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, due to the difficulties associated with controlling and measuring such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to adapt quickly after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
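
    A toy illustration of the bi-population idea (the objective, operators and memory size below are my own simplifications, not the authors' algorithm): two equal-sized populations swap the exploration and exploitation roles every generation, and a small memory of past bests is kept for reinjection after a change.

    ```python
    import random

    def fitness(x):                      # toy 1-D objective, optimum at x = 3
        return -(x - 3.0) ** 2

    def explore(pop):                    # large mutations: global search
        return [x + random.gauss(0, 1.0) for x in pop]

    def exploit(pop):                    # hill-climb around the current best
        best = max(pop, key=fitness)
        return [best + random.gauss(0, 0.05) for _ in pop]

    random.seed(0)
    pop_a = [random.uniform(-10, 10) for _ in range(20)]
    pop_b = [random.uniform(-10, 10) for _ in range(20)]
    memory = []

    for gen in range(50):
        if gen % 2 == 0:                 # alternate roles in a regular pattern
            pop_a, pop_b = explore(pop_a), exploit(pop_b)
        else:
            pop_a, pop_b = exploit(pop_a), explore(pop_b)
        memory.append(max(pop_a + pop_b, key=fitness))
        memory = sorted(memory, key=fitness)[-5:]   # keep the 5 best past solutions
        # After a detected environment change, one would reseed from `memory` here.

    print("best found:", max(memory, key=fitness))
    ```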

  18. Cardiac phenotyping in ex vivo murine embryos using microMRI.

    PubMed

    Cleary, Jon O; Price, Anthony N; Thomas, David L; Scambler, Peter J; Kyriakopoulou, Vanessa; McCue, Karen; Schneider, Jürgen E; Ordidge, Roger J; Lythgoe, Mark F

    2009-10-01

    Microscopic MRI (microMRI) is an emerging technique for high-throughput phenotyping of transgenic mouse embryos, and is capable of visualising abnormalities in cardiac development. To identify cardiac defects in embryos, we have optimised embryo preparation and MR acquisition parameters to maximise image quality and assess the phenotypic changes in chromodomain helicase DNA-binding protein 7 (Chd7) transgenic mice. microMRI methods rely on tissue penetration with a gadolinium chelate contrast agent to reduce tissue T1, thus improving the signal-to-noise ratio (SNR) in rapid gradient echo sequences. We investigated 15.5 days post coitum (dpc) wild-type CD-1 embryos fixed in gadolinium-diethylene triamine pentaacetic acid (Gd-DTPA) solutions for either 3 days (2 and 4 mM) or 2 weeks (2, 4, 8 and 16 mM). To assess penetration of the contrast agent into heart tissue and enable image contrast simulations, T1 and T2* were measured in the heart and in background agarose. Compared with 3-day fixation, 2-week fixation showed reduced mean T1 in the heart at both 2 and 4 mM concentrations (p < 0.0001), resulting in calculated signal gains of 23% (2 mM) and 29% (4 mM). Using T1 and T2* values from the 2-week concentrations, computer simulation of heart and background signal, and ex vivo 3D gradient echo imaging, we demonstrated that 2-week fixed embryos in 8 mM Gd-DTPA, in combination with optimised parameters (TE/TR/alpha/number of averages: 9 ms/20 ms/60 degrees/7), produced the largest SNR in the heart (23.2 +/- 1.0) and heart chamber contrast-to-noise ratio (CNR) (27.1 +/- 1.6). These optimised parameters were then applied to an MRI screen of embryos heterozygous for the gene Chd7, implicated in coloboma of the eye, heart defects, atresia of the choanae, retardation of growth, genital/urinary abnormalities, ear abnormalities and deafness (CHARGE) syndrome (a condition partly characterised by cardiovascular birth defects in humans). A ventricular septal defect was readily identified in the screen, consistent with the human phenotype. (c) 2009 John Wiley & Sons, Ltd.
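
    Such parameter searches rest on the standard spoiled gradient echo signal model, S ∝ sin α (1 − E1) e^(−TE/T2*) / (1 − E1 cos α) with E1 = e^(−TR/T1). A minimal sketch using the paper's TE/TR but placeholder T1/T2* values rather than its measurements:

    ```python
    import numpy as np

    def spgr_signal(t1_ms, t2s_ms, tr_ms, te_ms, alpha_deg):
        """Spoiled gradient-echo steady-state signal (arbitrary units)."""
        a = np.deg2rad(alpha_deg)
        e1 = np.exp(-tr_ms / t1_ms)
        return np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te_ms / t2s_ms)

    T1, T2S = 100.0, 30.0                 # assumed Gd-shortened tissue values (ms)
    alphas = np.arange(5, 91, 5)
    s = spgr_signal(T1, T2S, tr_ms=20.0, te_ms=9.0, alpha_deg=alphas)
    print("flip angle maximising signal:", alphas[np.argmax(s)], "deg")

    # The Ernst angle, cos(a) = exp(-TR/T1), gives the analytic optimum:
    print("Ernst angle:", np.rad2deg(np.arccos(np.exp(-20.0 / T1))), "deg")
    ```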

  19. Pixel-level tunable liquid crystal lenses for auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Li, Kun; Robertson, Brian; Pivnenko, Mike; Chu, Daping; Zhou, Jiong; Yao, Jun

    2014-02-01

    Mobile video and gaming are now widely used, and delivery of a glasses-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle and reduced brightness, mainly because of the need for multiple pixels per object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eye sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display will have the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly to existing mobile displays, and can deliver a 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser patterned to achieve single-pixel lens resolution, and a high-birefringence LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and to optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results.

  20. Decrements in knee extensor and flexor strength are associated with performance fatigue during simulated basketball game-play in adolescent, male players.

    PubMed

    Scanlan, Aaron T; Fox, Jordan L; Borges, Nattai R; Delextrat, Anne; Spiteri, Tania; Dalbo, Vincent J; Stanton, Robert; Kean, Crystal O

    2018-04-01

    This study quantified lower-limb strength decrements and assessed the relationships between strength decrements and performance fatigue during simulated basketball. Ten adolescent, male basketball players completed a circuit-based, basketball simulation. Sprint and jump performance were assessed during each circuit, with knee flexion and extension peak concentric torques measured at baseline, half-time, and full-time. Decrement scores were calculated for all measures. Mean knee flexor strength decrement was significantly (P < 0.05) related to sprint fatigue in the first half (R = 0.65), with dominant knee flexor strength (R = 0.67) and dominant flexor:extensor strength ratio (R = 0.77) decrement significantly (P < 0.05) associated with sprint decrement across the entire game. Mean knee extensor strength (R = 0.71), dominant knee flexor strength (R = 0.80), non-dominant knee flexor strength (R = 0.75), mean knee flexor strength (R = 0.81), non-dominant flexor:extensor strength ratio (R = 0.71), and mean flexor:extensor strength ratio (R = 0.70) decrement measures significantly (P < 0.05) influenced jump fatigue during the entire game. Lower-limb strength decrements may exert an important influence on performance fatigue during basketball activity in adolescent, male players. Consequently, training plans should aim to mitigate lower-limb fatigue to optimise sprint and jump performance during game-play.

  1. Simulation of Optimal Decision-Making Under the Impacts of Climate Change.

    PubMed

    Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl

    2017-07-01

    Climate change transforms the conditions of existing agricultural practices, prompting farmers to continuously evaluate their agricultural strategies, e.g. towards optimising revenue. In this light, this paper presents a framework for applying Bayesian updating to simulate decision-making, reaction patterns and updating of beliefs among farmers in a developing country when faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by continuously updating beliefs relative to realised trajectories of climate (change), represented by projections of temperature and precipitation. The climate data are based on combinations of output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5), representing moderate and insubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) holds a significant influence on the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows an adjustment of such beliefs to be simulated when improved information becomes available. The framework can therefore help facilitate the optimal choice between agricultural systems considering the influence of climate change.
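
    A minimal sketch of the Bayesian updating mechanism: a farmer holds beliefs over two candidate climate trajectories, updates them each year from observed rainfall, and picks the system with the highest expected revenue. All numbers are invented for illustration, not the paper's Ghana calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    belief = np.array([0.5, 0.5])              # prior over ["drier", "wetter"]
    rain_mean = np.array([500.0, 800.0])       # assumed mean annual rainfall (mm)
    rain_sd = 120.0

    # Assumed net revenue of each system under each climate state (rows: systems)
    revenue = np.array([[1.0, 0.6],            # dryland crops
                        [0.7, 1.5],            # irrigated crops
                        [0.9, 1.0]])           # livestock
    systems = ["dryland", "irrigated", "livestock"]

    true_state = 1                             # realised climate is 'wetter'
    for year in range(25):
        obs = rng.normal(rain_mean[true_state], rain_sd)
        like = np.exp(-0.5 * ((obs - rain_mean) / rain_sd) ** 2)
        belief = belief * like / np.sum(belief * like)       # Bayes update
        choice = systems[int(np.argmax(revenue @ belief))]   # maximise E[revenue]

    print(f"posterior P(wetter) = {belief[1]:.2f}, chosen system: {choice}")
    ```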

  2. Potential countersample materials for in vitro simulation wear testing.

    PubMed

    Shortall, Adrian C; Hu, Xiao Q; Marquis, Peter M

    2002-05-01

    Any laboratory investigation of the wear resistance of dental materials needs to consider oral conditions so that in vitro wear results can be correlated with in vivo findings. The choice of the countersample is a critical factor in establishing the pattern of tribological wear and in achieving an efficient in vitro wear testing system. This research investigated the wear behaviour and surface characteristics associated with three candidate countersample materials used for in vitro wear testing, in order to identify a possible suitable substitute for human dental enamel. Three candidate materials, stainless steel, steatite and dental porcelain, were evaluated and compared to human enamel. A variety of factors, including hardness, wear surface evolution and frictional coefficients, were considered relative to the tribology of the in vivo situation. The results suggested that, of the materials investigated, the dental porcelain bore the closest similarity to human enamel. Assessment of potential countersample materials should be based on the essential tribological simulation, supported by investigations of mechanical, chemical and structural properties. The selected dental porcelain showed the best simulating ability among the three candidate countersample materials, and this class of material may be considered as a possible countersample material for in vitro wear test purposes. Further studies are required, employing a wider range of dental ceramics, in order to optimise the choice of countersample material for standardised in vitro wear testing.

  3. Computer software tool REALM for sustainable water allocation and management.

    PubMed

    Perera, B J C; James, B; Kularathna, M D U

    2005-12-01

    REALM (REsource ALlocation Model) is a generalised computer simulation package that models the harvesting and bulk distribution of water resources within a water supply system. It is a modelling tool which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, the modelling of environmental flows, and the assessment of security-of-supply issues.
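
    The per-time-step allocation can be illustrated with a toy network linear programme (a generic stand-in, not REALM's actual formulation or API): flows are bounded by carrier capacities, mass balance holds at the source, and weighted deficit penalties encode supply priorities.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    inflow, storage = 40.0, 60.0
    demand = np.array([50.0, 30.0])       # urban, rural demands this time step
    carrier_cap = np.array([45.0, 25.0])  # capacity of each carrier

    # Decision variables: flows q1, q2 and deficits d1, d2.
    # Minimise weighted deficits (urban deficit penalised more heavily).
    c = np.array([0.0, 0.0, 10.0, 1.0])
    A_ub = [[1, 1, 0, 0]]                 # mass balance: q1 + q2 <= storage + inflow
    b_ub = [storage + inflow]
    A_eq = [[1, 0, 1, 0],                 # demand satisfaction: q_i + d_i = demand_i
            [0, 1, 0, 1]]
    b_eq = demand
    bounds = [(0, carrier_cap[0]), (0, carrier_cap[1]), (0, None), (0, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    q1, q2, d1, d2 = res.x
    print(f"urban gets {q1:.1f} (deficit {d1:.1f}), "
          f"rural gets {q2:.1f} (deficit {d2:.1f})")
    ```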

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vickery, A.; Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen; Deen, P. P.

    In recent years the use of repetition rate multiplication (RRM) on direct geometry neutron spectrometers has been established, and it is the common mode of operation on a growing number of instruments. However, the chopper configurations are not ideally optimised for RRM, with a resultant 100-fold flux difference across a broad wavelength band. This paper presents chopper configurations that will produce a relative constant (RC) energy resolution and a relative variable (RV) energy resolution for optimised use of RRM. The RC configuration provides an almost uniform ΔE/E for all incident wavelengths and enables an efficient use of time, as the entire dynamic range is probed with equivalent statistics, ideal for single-shot measurements of transient phenomena. The RV configuration provides an almost uniform opening time at the sample for all incident wavelengths, with three orders of magnitude in time resolution probed for a single European Spallation Source (ESS) period, which is ideal for probing complex relaxational behaviour. These two chopper configurations have been simulated for the Versatile Optimal Resolution direct geometry spectrometer, VOR, that will be built at ESS.
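
    The trade-off between the two philosophies follows from the standard time-of-flight relations v = h/(m_n λ), t = L/v and, to first order, ΔE/E ≈ 2Δt/t. A back-of-envelope sketch with an invented flight path and opening times (not the VOR chopper design):

    ```python
    import numpy as np

    H = 6.626e-34                          # Planck constant (J s)
    M_N = 1.675e-27                        # neutron mass (kg)
    L = 30.0                               # m, assumed chopper-to-detector distance

    lambdas = np.array([2.0, 4.0, 6.0, 8.0]) * 1e-10   # RRM wavelength frames (m)
    t = L * M_N * lambdas / H                          # time of flight (s)

    dt_fixed = 50e-6                       # RV-like: same opening time each frame
    print("RV-like dE/E:", 2 * dt_fixed / t)           # resolution varies ~ 1/lambda

    dt_scaled = dt_fixed * lambdas / lambdas[0]        # RC-like: dt grows with lambda
    print("RC-like dE/E:", 2 * dt_scaled / t)          # roughly uniform dE/E
    ```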

  5. SoLid Detector Technology

    NASA Astrophysics Data System (ADS)

    Labare, Mathieu

    2017-09-01

    SoLid is a reactor anti-neutrino experiment in which a novel detector is deployed at a minimum distance of 5.5 m from a nuclear reactor core. The purpose of the experiment is three-fold: to search for neutrino oscillations at a very short baseline; to measure the pure 235U neutrino energy spectrum; and to demonstrate the feasibility of neutrino detectors for reactor monitoring. This report presents the unique features of the SoLid detector technology. The technology has been optimised for the high background environment resulting from the low overburden and the vicinity of a nuclear reactor. The versatility of the detector technology is demonstrated with a 288 kg detector prototype which was deployed at the BR2 nuclear reactor in 2015. The data presented include reactor-on, reactor-off and calibration measurements. The measurement results are compared with Monte Carlo simulations. The 1.6 t SoLid detector is currently under construction, with an optimised design and upgraded material technology to enhance the detector capabilities. Its deployment on site is planned for the beginning of 2017 and offers the prospect of resolving the reactor anomaly within about two years.

  6. Contribution of 3-D time-lapse ERT to the study of leachate recirculation in a landfill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clement, R., E-mail: remi.clement@hmg.inpg.fr; Grenoble Universite, B.P. 53, 38041 Grenoble Cedex 9; Oxarango, L.

    2011-03-15

    Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. It aims at increasing the moisture content to optimise biodegradation. Because waste is a very heterogeneous and anisotropic porous medium, the geometry of the recirculated leachate plume is difficult to delineate from the surface at the scale of the bioreactor site. In this study, 3-D time-lapse electrical resistivity tomography (ERT) was used to obtain useful information for understanding leachate recirculation hydrodynamics. The ERT inversion methodology and the electrode arrays were optimised using numerical modelling simulating a 3-D leachate injection scenario. Time-lapse ERT was subsequently applied at the field scale during an experimental injection. We compared ERT images with injected volumes to evaluate the sensitivity of time-lapse ERT for delineating the plume migration. The results show that time-lapse ERT can accomplish the following: (i) accurately locate the injection plume, delineating its depth and lateral extension; and (ii) be used to estimate some hydraulic properties of waste.

  7. Design and optimisation of novel configurations of stormwater constructed wetlands

    NASA Astrophysics Data System (ADS)

    Kiiza, Christopher

    2017-04-01

    Constructed wetlands (CWs) are recognised as a cost-effective technology for wastewater treatment. CWs have been deployed, and could be retrofitted into existing urban drainage systems, to prevent surface water pollution, attenuate floods and act as sources of reusable water. However, numerous criteria exist for the design configuration and operation of CWs. The aim of the study was to examine the effects of design and operational variables on the performance of CWs. To achieve this, 8 novel designs of vertical flow CWs were continuously operated and monitored (weekly) for 2 years. Pollutant removal efficiency in each CW unit was evaluated from physico-chemical analyses of influent and effluent water samples. Hybrid optimised multi-layer perceptron artificial neural networks (MLP ANNs) were applied to simulate treatment efficiency in the CWs, and predictive and analytical models were subsequently developed for each design unit. Results show that the models have sound generalisation abilities, with various design configurations and operational variables influencing the performance of CWs. Although some design configurations attained faster and higher removal efficiencies than others, all 8 CW designs produced effluents permissible for discharge into watercourses with strict regulatory standards.
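
    A minimal sketch of the MLP-ANN modelling step with synthetic stand-in data (the study used two years of weekly influent/effluent chemistry from 8 designs; the inputs, hidden-layer sizes and response below are arbitrary):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Hypothetical predictors: influent load, bed depth, media grade, loading rate
    X = rng.uniform(size=(400, 4))
    # Hypothetical response: pollutant removal efficiency (%)
    y = 40 + 50 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 2, 400)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 3))
    ```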

  8. CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
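
    The flavour of such a solver can be conveyed with a minimal diagonal-gain iteration in the StefCal style, solving V_pq = g_p M_pq g_q* one antenna at a time (a simple stand-in to illustrate the idea, not CubiCal's implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_ant = 7
    g_true = rng.uniform(0.8, 1.2, n_ant) * np.exp(1j * rng.uniform(-0.5, 0.5, n_ant))
    M = np.ones((n_ant, n_ant), complex)        # assumed unit model visibilities
    V = np.outer(g_true, g_true.conj()) * M     # noiseless observed visibilities

    g = np.ones(n_ant, complex)
    for _ in range(100):
        g_new = np.empty_like(g)
        for p in range(n_ant):
            y = M[p] * g.conj()                 # y_q = M_pq * conj(g_q)
            g_new[p] = np.vdot(y, V[p]) / np.vdot(y, y)  # per-antenna least squares
        g = 0.5 * (g + g_new)                   # damped update aids convergence

    # Remove the unconstrained overall phase before comparing with the truth
    g *= np.exp(-1j * np.angle(g[0]))
    ref = g_true * np.exp(-1j * np.angle(g_true[0]))
    print("max |gain error|:", np.abs(g - ref).max())
    ```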

  9. Geant4-DNA: overview and recent developments

    NASA Astrophysics Data System (ADS)

    Štěpán, Václav

    Space travel and high altitude flights are inherently associated with prolonged exposure to cosmic and solar radiation. Understanding and simulating radiation action at the cellular and subcellular level contributes to the precise assessment of the associated health risks, and remains a challenge of today's radiobiology research. The Geant4-DNA project (http://geant4-dna.org) aims at developing an experimentally validated simulation platform for modelling the damage induced by ionising radiation at the DNA level. The platform is based on the Geant4 Monte Carlo simulation toolkit. The project extends specific functionalities of Geant4 in the following areas: the step-by-step single scattering modelling of elementary physical interactions of electrons, protons, alpha particles and light ions with liquid water and DNA bases, for the so-called "physical" stage; and the modelling of the "physico-chemical and chemical" stages, corresponding to the production and diffusion of, and the chemical reactions occurring between, the chemical species produced by water radiolysis, and to the radical attack on biological targets. Physical and chemical stage simulations are combined with biological target models on several scales, from the DNA double helix, through the nucleosome, to chromatin segments and cell geometries. In addition, data mining clustering algorithms have been developed and optimised for the purpose of DNA damage scoring in simulated tracks. Experimental measurements on pBR322 plasmid DNA are being carried out in order to validate the Geant4-DNA models. The plasmid DNA has been irradiated in dry conditions by protons with energies from 100 keV to 30 MeV, and in aqueous conditions, with and without scavengers, by 30 MeV protons, 290 MeV/u carbon and 500 MeV/u iron ions. Agarose gel electrophoresis combined with enzymatic treatment has been used to measure the resulting DNA damage. An overview of the developments undertaken by the Geant4-DNA collaboration, including a description of the software already available for download, as well as future perspectives, will be presented on behalf of the Geant4-DNA Collaboration.

  10. The UPSCALE project: a large simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane

    2014-05-01

    The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km now allows the impact of resolution on the mean state, variability and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organisation to run ensembles of 27-year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25 km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216) and 135 km (N96) to look at the value of high resolution climate models in the study of both the present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data were produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated were transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities, and we welcome approaches from other interested scientists. This presentation will briefly cover the following points: the purpose and requirements of the UPSCALE project and the facilities used; technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer); ensemble specification; and current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model Development, with overview information available from http://proj.badc.rl.ac.uk/upscale .

  11. Optimised 'on demand' protein arraying from DNA by cell free expression with the 'DNA to Protein Array' (DAPA) technology.

    PubMed

    Schmidt, Ronny; Cook, Elizabeth A; Kastelic, Damjana; Taussig, Michael J; Stoevesandt, Oda

    2013-08-02

    We have previously described a protein arraying process based on cell-free expression from DNA template arrays (DNA Array to Protein Array, DAPA). Here, we have investigated the influence of different array support coatings (Ni-NTA, epoxy, 3D-epoxy and polyethylene glycol methacrylate (PEGMA)). Their optimal combination yields an increased amount of detected protein and an optimised spot morphology on the resulting protein array compared with the previously published protocol. The specificity of protein capture was improved using a tag-specific capture antibody on a protein-repellent surface coating. The conditions for protein expression were optimised to yield the maximum amount of protein, or the best detection results using specific monoclonal antibodies or a scaffold binder against the expressed targets. The optimised DAPA system increased the expression of a representative model protein threefold while conserving recognition by a specific antibody. The amount of protein expressed by DAPA was comparable to that of classically spotted protein arrays. Reaction conditions can be tailored to suit the application of interest. DAPA represents a cost-effective, easy and convenient way of producing protein arrays on demand. The reported work is expected to facilitate the application of DAPA for personalised medicine and screening purposes. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist

    PubMed Central

    Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N

    2012-01-01

    Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Results: Tablets containing sodium alginate had the greatest raft strength compared with the other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug–excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. Conclusion: It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933

  13. Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist.

    PubMed

    Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N

    2012-10-01

    The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Tablets containing sodium alginate had the greatest raft strength compared with the other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease.
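
    A sketch of how such a 2³ full-factorial design is constructed and its main effects estimated (coded −1/+1 levels; the response values are invented stand-ins for measured raft strength, not the study's data):

    ```python
    import itertools
    import numpy as np

    runs = np.array(list(itertools.product([-1, 1], repeat=3)))   # all 8 combinations
    # columns: sodium alginate, calcium carbonate, sodium bicarbonate (coded levels)
    y = np.array([12, 14, 13, 16, 20, 23, 22, 27], float)         # hypothetical response

    X = np.column_stack([np.ones(8), runs])                       # intercept + factors
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)                  # main-effect estimates
    for name, b in zip(["mean", "alginate", "CaCO3", "NaHCO3"], coef):
        print(f"{name:9s}: {b:+.2f}")
    ```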

  14. Optimisation of strain selection in evolutionary continuous culture

    NASA Astrophysics Data System (ADS)

    Bayen, T.; Mairet, F.

    2017-12-01

    In this work, we study a minimal-time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on the optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.
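
    A toy version of the most-rapid-approach idea on a two-species chemostat with Monod kinetics (all parameters and the target substrate level are my own choices; the paper derives the singular arc and feedback rigorously from Pontryagin's Principle):

    ```python
    import numpy as np

    def mu(s, mu_max, k):                    # Monod growth law
        return mu_max * s / (k + s)

    MU_MAX = np.array([1.0, 0.8])
    K = np.array([0.5, 0.1])
    S_IN, D_MAX = 3.0, 1.5
    S_BAR = 1.8   # above the crossover s = 1.5, where species 1 outgrows species 2

    dt = 0.01
    s, x = 2.0, np.array([0.5, 0.5])         # substrate, two species
    for _ in range(20_000):
        D = D_MAX if s < S_BAR else 0.0      # bang-bang drive toward the arc
        growth = mu(s, MU_MAX, K)
        s += dt * (D * (S_IN - s) - np.dot(growth, x))
        x += dt * (growth - D) * x

    print("species fractions:", np.round(x / x.sum(), 3))   # species 1 is selected
    ```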

  15. First on-sky results of a neural network based tomographic reconstructor: Carmen on Canary

    NASA Astrophysics Data System (ADS)

    Osborn, J.; Guzman, D.; de Cos Juez, F. J.; Basden, A. G.; Morris, T. J.; Gendron, É.; Butterley, T.; Myers, R. M.; Guesalaga, A.; Sanchez Lasheras, F.; Gomez Victoria, M.; Sánchez Rodríguez, M. L.; Gratadour, D.; Rousset, G.

    2014-07-01

    We present on-sky results obtained with Carmen, an artificial neural network tomographic reconstructor. It was tested during two nights in July 2013 on Canary, an AO demonstrator on the William Herschel Telescope. Carmen is trained during the day on the Canary calibration bench. This training regime ensures that Carmen is entirely flexible in terms of atmospheric turbulence profile, negating any need to re-optimise the reconstructor in changing atmospheric conditions. Carmen was run in short bursts, interlaced with an optimised Learn and Apply reconstructor. We found the performance of Carmen to be approximately 5% lower than that of Learn and Apply.

  16. Optimising Habitat-Based Models for Wide-Ranging Marine Predators: Scale Matters

    NASA Astrophysics Data System (ADS)

    Scales, K. L.; Hazen, E. L.; Jacox, M.; Edwards, C. A.; Bograd, S. J.

    2016-12-01

    Predicting the responses of marine top predators to dynamic oceanographic conditions requires habitat-based models that sufficiently capture environmental preferences. The spatial resolution and temporal averaging of environmental data layers are key aspects of model construction. The utility of surfaces contemporaneous with animal movement (e.g. daily, weekly) versus synoptic products (monthly, seasonal, climatological) is currently under debate, as is the optimal spatial resolution for predictive products. Using movement simulations with built-in environmental preferences (correlated random walks, multi-state hidden Markov-type models) together with modelled (Regional Ocean Modeling System, ROMS) and remotely sensed (MODIS-Aqua) datasets, we explored the effects of degrading environmental surfaces (3 km - 1 degree, daily - climatological) on model inference. We simulated the movements of a hypothetical wide-ranging marine predator through the California Current system over a three-month period (May-June-July), based on metrics derived from previously published blue whale Balaenoptera musculus tracking studies. Results indicate that models using seasonal or climatological data fields can overfit true environmental preferences, in both presence-absence and behaviour-based model formulations. Moreover, the effects of a degradation in spatial resolution are more pronounced when using temporally averaged fields than when using daily, weekly or monthly datasets. In addition, we observed a notable divergence between the 'best' models selected using common methods (e.g. AUC, AICc) and those that most accurately reproduced the built-in environmental preferences. These findings have important implications for the conservation and management of marine mammals, seabirds, sharks, sea turtles and large teleost fish, particularly in implementing dynamic ocean management initiatives and in forecasting responses to future climate-mediated ecosystem change.
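
    A minimal correlated random walk with a built-in environmental preference, of the kind used to generate such simulated tracks (the habitat field is synthetic and the step and turning parameters are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def habitat(x, y):                  # smooth synthetic habitat field, peak at (0, 0)
        return np.exp(-(x**2 + y**2) / 50.0)

    x, y, heading = 5.0, 5.0, 0.0
    for _ in range(500):
        # Propose correlated candidate headings; prefer those improving habitat value
        cands = heading + rng.normal(0, 0.6, size=5)
        scores = [habitat(x + np.cos(h), y + np.sin(h)) for h in cands]
        heading = cands[int(np.argmax(scores))]
        x, y = x + np.cos(heading), y + np.sin(heading)

    print("final habitat value:", round(habitat(x, y), 3))
    ```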

  17. Optimising Habitat-Based Models for Wide-Ranging Marine Predators: Scale Matters

    NASA Astrophysics Data System (ADS)

    Scales, K. L.; Hazen, E. L.; Jacox, M.; Edwards, C. A.; Bograd, S. J.

    2016-02-01

    Predicting the responses of marine top predators to dynamic oceanographic conditions requires habitat-based models that sufficiently capture environmental preferences. The spatial resolution and temporal averaging of environmental data layers are key aspects of model construction. The utility of surfaces contemporaneous with animal movement (e.g. daily, weekly) versus synoptic products (monthly, seasonal, climatological) is currently under debate, as is the optimal spatial resolution for predictive products. Using movement simulations with built-in environmental preferences (correlated random walks, multi-state hidden Markov-type models) together with modelled (Regional Ocean Modeling System, ROMS) and remotely sensed (MODIS-Aqua) datasets, we explored the effects of degrading environmental surfaces (3 km - 1 degree, daily - climatological) on model inference. We simulated the movements of a hypothetical wide-ranging marine predator through the California Current system over a three-month period (May-June-July), based on metrics derived from previously published blue whale Balaenoptera musculus tracking studies. Results indicate that models using seasonal or climatological data fields can overfit true environmental preferences, in both presence-absence and behaviour-based model formulations. Moreover, the effects of a degradation in spatial resolution are more pronounced when using temporally averaged fields than when using daily, weekly or monthly datasets. In addition, we observed a notable divergence between the 'best' models selected using common methods (e.g. AUC, AICc) and those that most accurately reproduced the built-in environmental preferences. These findings have important implications for the conservation and management of marine mammals, seabirds, sharks, sea turtles and large teleost fish, particularly in implementing dynamic ocean management initiatives and in forecasting responses to future climate-mediated ecosystem change.

  18. Detecting future performance of the reservoirs under the changing climate

    NASA Astrophysics Data System (ADS)

    Biglarbeigi, Pardis; Strong, W. Alan; Griffiths, Philip

    2017-04-01

    Climate change is expected to affect the hydrological cycle, resulting in changes in rainfall patterns and seasonal variations as well as flooding and drought. Changes in the hydrologic regime of rivers are another anticipated effect of climate change. This climatic variability puts pressure on renewable water resources, with increases in some regions, decreases in others and high uncertainties in every region. As a result of the pressure of climate change on water resources, the operation of reservoirs and dams is expected to face uncertainties in different aspects such as water supply and flood control. In this study, we model two hypothetical dams on different streamflows, based on the water needs of 20,000 and 100,000 people. The UK, a country that has suffered several flooding events in recent years, and Iran, a country with severe water scarcity, are chosen as the nations under study. For this study, the hypothetical modelled dam is located on three streamflows in each nation. The mass-balance model of the system is then optimised over 25 years of historical data, considering two objectives: 1) minimisation of the water deficit in different sectors (agricultural, domestic and industrial) and 2) minimisation of flooding around the reservoir catchment. The optimised policies are then simulated in the model again under different climate change and demographic scenarios to obtain the resilience, reliability and vulnerability (RRV) indices of the system. To this end, two different sets of scenarios are introduced: the first set comprises the scenarios introduced by the IPCC in its Special Report on Emissions Scenarios (SRES); the second set is a Monte Carlo simulation of demographic and temperature scenarios. Demographic scenarios are defined from the UN's estimation of population based on age, sex, fertility, mortality and migration rates with a 2-year frequency. Temperature scenarios, on the other hand, are defined based on the target of COP21, Paris, which proposed to keep "the global temperature increase well below 2 degrees Celsius, while urging efforts to limit the increase to 1.5 degrees", as well as temperatures above this limit to better address the effects of climate change. Numerical results of the proposed model are anticipated to represent the performance of the system up to the year 2100 through the RRV indices. RRV metrics are an effective means of quantitatively estimating climate change impacts on reservoir systems in order to identify potential policies to solve future water supply issues.
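
    RRV indices are conventionally computed in the Hashimoto style: reliability is the fraction of time the system is in a satisfactory state, resilience the probability of recovering from failure in the next step, and vulnerability the expected severity of failures. A sketch on a synthetic deficit series (the study derives its series from the optimised mass-balance model):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Monthly supply deficit; 0 means demand was fully met that month
    deficit = np.maximum(rng.normal(-5, 8, size=1200), 0.0)

    ok = deficit == 0
    reliability = ok.mean()                        # time in satisfactory state

    fail = ~ok
    recoveries = np.sum(fail[:-1] & ok[1:])        # failure -> success transitions
    resilience = recoveries / max(fail[:-1].sum(), 1)   # P(recover | failing)

    vulnerability = deficit[fail].mean() if fail.any() else 0.0  # mean failure severity
    print(f"reliability={reliability:.2f}, resilience={resilience:.2f}, "
          f"vulnerability={vulnerability:.2f}")
    ```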

  19. Simulation modelling as a tool for knowledge mobilisation in health policy settings: a case study protocol.

    PubMed

    Freebairn, L; Atkinson, J; Kelly, P; McDonnell, G; Rychetnik, L

    2016-09-21

    Evidence-informed decision-making is essential to ensure that health programs and services are effective and offer value for money; however, barriers to the use of evidence persist. Emerging systems science approaches and advances in technology are providing new methods and tools to facilitate evidence-based decision-making. Simulation modelling offers a unique tool for synthesising and leveraging existing evidence, data and expert local knowledge to examine, in a robust, low risk and low cost way, the likely impact of alternative policy and service provision scenarios. This case study will evaluate participatory simulation modelling to inform the prevention and management of gestational diabetes mellitus (GDM). The risks associated with GDM are well recognised; however, debate remains regarding diagnostic thresholds and whether screening and treatment to reduce maternal glucose levels reduce the associated risks. A diagnosis of GDM may provide a leverage point for multidisciplinary lifestyle modification interventions. This research will apply and evaluate a simulation modelling approach to understand the complex interrelation of factors that drive GDM rates, test options for screening and interventions, and optimise the use of evidence to inform policy and program decision-making. The study design will use mixed methods to achieve the objectives. Policy, clinical practice and research experts will work collaboratively to develop, test and validate a simulation model of GDM in the Australian Capital Territory (ACT). The model will be applied to support evidence-informed policy dialogues with diverse stakeholders for the management of GDM in the ACT. Qualitative methods will be used to evaluate simulation modelling as an evidence synthesis tool to support evidence-based decision-making. Interviews and analysis of workshop recordings will focus on the participants' engagement in the modelling process; perceived value of the participatory process, perceived commitment, influence and confidence of stakeholders in implementing policy and program decisions identified in the modelling process; and the impact of the process in terms of policy and program change. The study will generate empirical evidence on the feasibility and potential value of simulation modelling to support knowledge mobilisation and consensus building in health settings.

  20. Selecting a climate model subset to optimise key ensemble properties

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
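
    A brute-force illustration of the subset-selection idea: among all k-member subsets, choose the one whose mean best matches observations. The numbers are toys and the criterion is a single RMSE; the paper uses an optimisation solver and richer cost functions that also address spread and model interdependence.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    n_models, n_grid, k = 10, 50, 4
    obs = rng.normal(0, 1, n_grid)
    models = obs + rng.normal(0.3, 0.8, (n_models, n_grid))   # biased model errors

    def rmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    best = min(itertools.combinations(range(n_models), k),
               key=lambda idx: rmse(models[list(idx)].mean(axis=0), obs))
    print("selected subset:", best)
    print("subset-mean RMSE:", rmse(models[list(best)].mean(axis=0), obs))
    print("full-ensemble-mean RMSE:", rmse(models.mean(axis=0), obs))
    ```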
